Part II. Clair on Red Hat Quay
This guide contains procedures for running Clair on Red Hat Quay in both standalone and OpenShift Container Platform Operator deployments.
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/vulnerability_reporting_with_clair_on_red_hat_quay/testing-clair-with-quay
Chapter 3. Distribution of content in RHEL 9
3.1. Installation

Red Hat Enterprise Linux 9 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures:

Installation ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. On the Product Downloads page, the Installation ISO is referred to as Binary DVD.

Note: The Installation ISO image is several GB in size and, as a result, might not fit on optical media formats. A USB key or USB hard drive is recommended when using the Installation ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document.

Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Installation ISO image. You can also register to Red Hat CDN or Satellite during the installation to use the latest BaseOS and AppStream content from Red Hat CDN or Satellite.

See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document.

3.2. Repositories

Red Hat Enterprise Linux 9 is distributed through two main repositories: BaseOS and AppStream. Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions.

Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in previous releases of RHEL. For more information, see the Scope of Coverage Details document.

Content in the AppStream repository includes additional user-space applications, runtime languages, and databases in support of the varied workloads and use cases.

In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported.

For more information about RHEL 9 repositories and the packages they provide, see the Package manifest.

3.3. Application Streams

Multiple versions of user-space components are delivered as Application Streams and updated more frequently than the core operating system packages. This provides greater flexibility to customize RHEL without impacting the underlying stability of the platform or specific deployments.

Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules, as Software Collections, or as Flatpaks. Each Application Stream component has a given life cycle, either the same as RHEL 9 or shorter. For RHEL life cycle information, see Red Hat Enterprise Linux Life Cycle.

RHEL 9 improves the Application Streams experience by providing initial Application Stream versions that can be installed as RPM packages using the traditional dnf install command.
Note: Certain initial Application Streams in the RPM format have a shorter life cycle than Red Hat Enterprise Linux 9. Some additional Application Stream versions will be distributed as modules with a shorter life cycle in future minor RHEL 9 releases.

Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. Always determine which version of an Application Stream you want to install, and review the Red Hat Enterprise Linux Application Stream Lifecycle first.

Content that needs rapid updating, such as alternate compilers and container tools, is available in rolling streams that do not provide alternative versions in parallel. Rolling streams may be packaged as RPMs or modules.

For information about Application Streams available in RHEL 9 and their application compatibility level, see the Package manifest. Application compatibility levels are explained in the Red Hat Enterprise Linux 9: Application Compatibility Guide document.

3.4. Package management with YUM/DNF

In Red Hat Enterprise Linux 9, software installation is handled by DNF. Red Hat continues to support the use of the yum term for consistency with previous major versions of RHEL. If you type dnf instead of yum, the command works as expected because both are aliases for compatibility. Although RHEL 8 and RHEL 9 are based on DNF, they are compatible with YUM used in RHEL 7. For more information, see Managing software with the DNF tool.
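Because yum is kept as a compatibility alias for dnf, the two commands are interchangeable on RHEL 9. The following commands are only an illustrative sketch (the httpd package name is an arbitrary example, and the symlink target shown reflects how the alias is typically implemented, which can vary by release):

$ dnf install httpd        # install a package with DNF
$ yum install httpd        # the same operation through the yum compatibility alias
$ ls -l /usr/bin/yum       # on RHEL 9 systems, yum is typically a symlink to dnf-3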
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.2_release_notes/distribution
Chapter 3. Using system-wide cryptographic policies
System-wide cryptographic policies are a system component that configures the core cryptographic subsystems, covering the TLS, IPsec, SSH, DNSSEC, and Kerberos protocols. The component provides a small set of policies that the administrator can select.

3.1. System-wide cryptographic policies

When a system-wide policy is set up, applications in RHEL follow it and refuse to use algorithms and protocols that do not meet the policy, unless you explicitly request the application to do so. That is, the policy applies to the default behavior of applications when running with the system-provided configuration, but you can override it if required. RHEL 8 contains the following predefined policies:

DEFAULT: The default system-wide cryptographic policy level offers secure settings for current threat models. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. RSA keys and Diffie-Hellman parameters are accepted if they are at least 2048 bits long.

LEGACY: Ensures maximum compatibility with Red Hat Enterprise Linux 5 and earlier; it is less secure due to an increased attack surface. In addition to the DEFAULT level algorithms and protocols, it includes support for the TLS 1.0 and 1.1 protocols. The algorithms DSA, 3DES, and RC4 are allowed, and RSA keys and Diffie-Hellman parameters are accepted if they are at least 1023 bits long.

FUTURE: A stricter forward-looking security level intended for testing a possible future policy. This policy does not allow the use of SHA-1 in signature algorithms. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. RSA keys and Diffie-Hellman parameters are accepted if they are at least 3072 bits long. If your system communicates on the public internet, you might face interoperability problems.

Important: Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements of the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment. To work around this problem, use the DEFAULT cryptographic policy while connecting to the Customer Portal API.

FIPS: Conforms with the FIPS 140 requirements. The fips-mode-setup tool, which switches the RHEL system into FIPS mode, uses this policy internally. Switching to the FIPS policy does not guarantee compliance with the FIPS 140 standard. You also must re-generate all cryptographic keys after you set the system to FIPS mode. This is not possible in many scenarios.

RHEL also provides the FIPS:OSPP system-wide subpolicy, which contains further restrictions for cryptographic algorithms required by the Common Criteria (CC) certification. The system becomes less interoperable after you set this subpolicy. For example, you cannot use RSA and DH keys shorter than 3072 bits, additional SSH algorithms, and several TLS groups. Setting FIPS:OSPP also prevents connecting to the Red Hat Content Delivery Network (CDN). Furthermore, you cannot integrate Active Directory (AD) into IdM deployments that use FIPS:OSPP, communication between RHEL hosts using FIPS:OSPP and AD domains might not work, or some AD accounts might not be able to authenticate.

Note: Your system is not CC-compliant after you set the FIPS:OSPP cryptographic subpolicy. The only correct way to make your RHEL system compliant with the CC standard is by following the guidance provided in the cc-config package.
See the Common Criteria section on the Product compliance Red Hat Customer Portal page for a list of certified RHEL versions, validation reports, and links to CC guides.

Red Hat continuously adjusts all policy levels so that all libraries provide secure defaults, except when using the LEGACY policy. Even though the LEGACY profile does not provide secure defaults, it does not include any algorithms that are easily exploitable. As such, the set of enabled algorithms or acceptable key sizes in any provided policy may change during the lifetime of Red Hat Enterprise Linux. Such changes reflect new security standards and new security research. If you must ensure interoperability with a specific system for the whole lifetime of Red Hat Enterprise Linux, you should opt out of the system-wide cryptographic policies for components that interact with that system, or re-enable specific algorithms using custom cryptographic policies.

The specific algorithms and ciphers described as allowed in the policy levels are available only if an application supports them.

Table 3.1. Cipher suites and protocols enabled in the cryptographic policies

Cipher suite or protocol | LEGACY | DEFAULT | FIPS | FUTURE
IKEv1 | no | no | no | no
3DES | yes | no | no | no
RC4 | yes | no | no | no
DH | min. 1024-bit | min. 2048-bit | min. 2048-bit [a] | min. 3072-bit
RSA | min. 1024-bit | min. 2048-bit | min. 2048-bit | min. 3072-bit
DSA | yes | no | no | no
TLS v1.0 | yes | no | no | no
TLS v1.1 | yes | no | no | no
SHA-1 in digital signatures | yes | yes | no | no
CBC mode ciphers | yes | yes | yes | no [b]
Symmetric ciphers with keys < 256 bits | yes | yes | yes | no
SHA-1 and SHA-224 signatures in certificates | yes | yes | yes | no

[a] You can use only Diffie-Hellman groups defined in RFC 7919 and RFC 3526.
[b] CBC ciphers are disabled for TLS. In a non-TLS scenario, AES-128-CBC is disabled but AES-256-CBC is enabled. To also disable AES-256-CBC, apply a custom subpolicy.

Additional resources: crypto-policies(7) and update-crypto-policies(8) man pages on your system; Product compliance (Red Hat Customer Portal)

3.2. Changing the system-wide cryptographic policy

You can change the system-wide cryptographic policy on your system by using the update-crypto-policies tool and restarting your system.

Prerequisites: You have root privileges on the system.

Procedure: Optional: Display the current cryptographic policy. Set the new cryptographic policy, replacing <POLICY> with the policy or subpolicy you want to set, for example FUTURE, LEGACY, or FIPS:OSPP. Restart the system. The corresponding commands are shown after this section.

Verification: Display the current cryptographic policy.

Additional resources: For more information on system-wide cryptographic policies, see System-wide cryptographic policies.

3.3. Switching the system-wide cryptographic policy to a mode compatible with earlier releases

The default system-wide cryptographic policy in Red Hat Enterprise Linux 8 does not allow communication using older, insecure protocols. For environments that need to be compatible with Red Hat Enterprise Linux 6 and, in some cases, also with earlier releases, the less secure LEGACY policy level is available.

Warning: Switching to the LEGACY policy level results in a less secure system and applications.

Procedure: To switch the system-wide cryptographic policy to the LEGACY level, enter the update-crypto-policies command as root, as shown below.

Additional resources: For the list of available cryptographic policy levels, see the update-crypto-policies(8) man page on your system.
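The update-crypto-policies invocations for these two procedures, taken from the command listing that accompanies this chapter, follow. The root prompt markers and the separation of commands from their output are added here for readability and are an assumption about how the original blocks were laid out:

# update-crypto-policies --show
DEFAULT
# update-crypto-policies --set <POLICY>
# reboot
# update-crypto-policies --show
<POLICY>
# update-crypto-policies --set LEGACY
Setting system policy to LEGACY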
For defining custom cryptographic policies, see the Custom Policies section in the update-crypto-policies(8) man page and the Crypto Policy Definition Format section in the crypto-policies(7) man page on your system. 3.4. Setting up system-wide cryptographic policies in the web console You can set one of system-wide cryptographic policies and subpolicies directly in the RHEL web console interface. Besides the four predefined system-wide cryptographic policies, you can also apply the following combinations of policies and subpolicies through the graphical interface now: DEFAULT:SHA1 The DEFAULT policy with the SHA-1 algorithm enabled. LEGACY:AD-SUPPORT The LEGACY policy with less secure settings that improve interoperability for Active Directory services. FIPS:OSPP The FIPS policy with further restrictions required by the Common Criteria for Information Technology Security Evaluation standard. Warning Because the FIPS:OSPP system-wide subpolicy contains further restrictions for cryptographic algorithms required by the Common Criteria (CC) certification, the system is less interoperable after you set it. For example, you cannot use RSA and DH keys shorter than 3072 bits, additional SSH algorithms, and several TLS groups. Setting FIPS:OSPP also prevents connecting to Red Hat Content Delivery Network (CDN) structure. Furthermore, you cannot integrate Active Directory (AD) into the IdM deployments that use FIPS:OSPP , communication between RHEL hosts using FIPS:OSPP and AD domains might not work, or some AD accounts might not be able to authenticate. Note that your system is not CC-compliant after you set the FIPS:OSPP cryptographic subpolicy. The only correct way to make your RHEL system compliant with the CC standard is by following the guidance provided in the cc-config package. See the Common Criteria section on the Product compliance Red Hat Customer Portal page for a list of certified RHEL versions, validation reports, and links to CC guides hosted at the National Information Assurance Partnership (NIAP) website. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . You have root privileges or permissions to enter administrative commands with sudo . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the Configuration card of the Overview page, click your current policy value to Crypto policy . In the Change crypto policy dialog window, click on the policy you want to start using on your system. Click the Apply and reboot button. Verification After the restart, log back in to web console, and check that the Crypto policy value corresponds to the one you selected. Alternatively, you can enter the update-crypto-policies --show command to display the current system-wide cryptographic policy in your terminal. 3.5. Excluding an application from following system-wide cryptographic policies You can customize cryptographic settings used by your application preferably by configuring supported cipher suites and protocols directly in the application. You can also remove a symlink related to your application from the /etc/crypto-policies/back-ends directory and replace it with your customized cryptographic settings. This configuration prevents the use of system-wide cryptographic policies for applications that use the excluded back end. Furthermore, this modification is not supported by Red Hat. 
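As a quick orientation for the back-end mechanism described above, you can list the per-library configuration files that the policy generates. The file names shown in the comments are illustrative and vary by RHEL release; this is a sketch, not a documented step, and replacing a symlink remains an unsupported modification as noted above:

$ ls -l /etc/crypto-policies/back-ends/
# Each entry (for example gnutls.config, openssl.config, or opensshserver.config)
# is typically a symlink into /usr/share/crypto-policies/<ACTIVE-POLICY>/.
# Replacing a symlink with a regular file opts that back end out of the
# system-wide policy.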
3.5.1. Examples of opting out of the system-wide cryptographic policies

wget: To customize cryptographic settings used by the wget network downloader, use the --secure-protocol and --ciphers options (see the example commands after this section). See the HTTPS (SSL/TLS) Options section of the wget(1) man page for more information.

curl: To specify ciphers used by the curl tool, use the --ciphers option and provide a colon-separated list of ciphers as a value (see the example after this section). See the curl(1) man page for more information.

Firefox: Even though you cannot opt out of system-wide cryptographic policies in the Firefox web browser, you can further restrict supported ciphers and TLS versions in Firefox's Configuration Editor. Type about:config in the address bar and change the value of the security.tls.version.min option as required. Setting security.tls.version.min to 1 allows TLS 1.0 as the minimum required version, setting it to 2 requires TLS 1.1 as the minimum, and so on.

OpenSSH: To opt out of the system-wide cryptographic policies for your OpenSSH server, uncomment the line with the CRYPTO_POLICY= variable in the /etc/sysconfig/sshd file. After this change, the values that you specify in the Ciphers, MACs, KexAlgorithms, and GSSAPIKexAlgorithms sections of the /etc/ssh/sshd_config file are not overridden. See the sshd_config(5) man page for more information. To opt out of system-wide cryptographic policies for your OpenSSH client, perform one of the following tasks: For a given user, override the global ssh_config with a user-specific configuration in the ~/.ssh/config file. For the entire system, specify the cryptographic policy in a drop-in configuration file located in the /etc/ssh/ssh_config.d/ directory, with a two-digit number prefix smaller than 5, so that it lexicographically precedes the 05-redhat.conf file, and with a .conf suffix, for example, 04-crypto-policy-override.conf. See the ssh_config(5) man page for more information.

Libreswan: See Configuring IPsec connections that opt out of the system-wide crypto policies in the Securing networks document for detailed information.

Additional resources: update-crypto-policies(8) man page on your system

3.6. Customizing system-wide cryptographic policies with subpolicies

Use this procedure to adjust the set of enabled cryptographic algorithms or protocols. You can either apply custom subpolicies on top of an existing system-wide cryptographic policy or define such a policy from scratch.

The concept of scoped policies allows enabling different sets of algorithms for different back ends. You can limit each configuration directive to specific protocols, libraries, or services. Furthermore, directives can use asterisks for specifying multiple values using wildcards. The /etc/crypto-policies/state/CURRENT.pol file lists all settings in the currently applied system-wide cryptographic policy after wildcard expansion. To make your cryptographic policy more strict, consider using values listed in the /usr/share/crypto-policies/policies/FUTURE.pol file. You can find example subpolicies in the /usr/share/crypto-policies/policies/modules/ directory. The subpolicy files in this directory also contain descriptions in lines that are commented out.

Note: Customization of system-wide cryptographic policies is available from RHEL 8.2. You can use the concept of scoped policies and the option of using wildcards in RHEL 8.5 and newer.
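For reference, the wget and curl invocations mentioned in Section 3.5.1, as given in the command listing that accompanies this chapter (whitespace around the option values normalized):

$ wget --secure-protocol=TLSv1_1 --ciphers="SECURE128" https://example.com
$ curl https://example.com --ciphers '@SECLEVEL=0:DES-CBC3-SHA:RSA-DES-CBC3-SHA'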
Procedure

Change to the /etc/crypto-policies/policies/modules/ directory. Create subpolicies for your adjustments, for example MYCRYPTO-1.pmod and SCOPES-AND-WILDCARDS.pmod (the corresponding commands and example module contents are listed after this section).

Important: Use upper-case letters in the file names of policy modules.

Open the policy modules in a text editor of your choice and insert options that modify the system-wide cryptographic policy, then save the changes in the module files. Apply your policy adjustments to the DEFAULT system-wide cryptographic policy level. To make your cryptographic settings effective for already running services and applications, restart the system.

Verification: Check that the /etc/crypto-policies/state/CURRENT.pol file contains your changes.

Additional resources: Custom Policies section in the update-crypto-policies(8) man page on your system; Crypto Policy Definition Format section in the crypto-policies(7) man page on your system; How to customize crypto policies in RHEL 8.2 Red Hat blog article

3.7. Disabling SHA-1 by customizing a system-wide cryptographic policy

Because the SHA-1 hash function has an inherently weak design, and advancing cryptanalysis has made it vulnerable to attacks, RHEL 8 does not use SHA-1 by default. Nevertheless, some third-party applications, for example public signatures, still use SHA-1. To disable the use of SHA-1 in signature algorithms on your system, you can use the NO-SHA1 policy module.

Important: The NO-SHA1 policy module disables the SHA-1 hash function only in signatures and not elsewhere. In particular, the NO-SHA1 module still allows the use of SHA-1 with hash-based message authentication codes (HMAC). This is because HMAC security properties do not rely on the collision resistance of the corresponding hash function, and therefore the recent attacks on SHA-1 have a significantly lower impact on the use of SHA-1 for HMAC.

If your scenario requires disabling a specific key exchange (KEX) algorithm combination, for example, diffie-hellman-group-exchange-sha1, but you still want to use both the relevant KEX and the algorithm in other combinations, see the Red Hat Knowledgebase solution Steps to disable the diffie-hellman-group1-sha1 algorithm in SSH for instructions on opting out of system-wide crypto-policies for SSH and configuring SSH directly.

Note: The module for disabling SHA-1 is available from RHEL 8.3. Customization of system-wide cryptographic policies is available from RHEL 8.2.

Procedure: Apply your policy adjustments to the DEFAULT system-wide cryptographic policy level (see the command listing after this section). To make your cryptographic settings effective for already running services and applications, restart the system.

Additional resources: Custom Policies section in the update-crypto-policies(8) man page on your system; Crypto Policy Definition Format section in the crypto-policies(7) man page on your system; How to customize crypto policies in RHEL Red Hat blog article

3.8. Creating and setting a custom system-wide cryptographic policy

For specific scenarios, you can customize the system-wide cryptographic policy by creating and using a complete policy file.

Note: Customization of system-wide cryptographic policies is available from RHEL 8.2.
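For reference, the commands and example module contents for Sections 3.6 and 3.7, taken from the command listing that accompanies this chapter. The indentation of the file contents and the leading # comment markers inside the .pmod files reflect the usual crypto-policies module format and are reconstructed here for readability:

cd /etc/crypto-policies/policies/modules/
touch MYCRYPTO-1.pmod
touch SCOPES-AND-WILDCARDS.pmod

vi MYCRYPTO-1.pmod
    min_rsa_size = 3072
    hash = SHA2-384 SHA2-512 SHA3-384 SHA3-512

vi SCOPES-AND-WILDCARDS.pmod
    # Disable the AES-128 cipher, all modes
    cipher = -AES-128-*
    # Disable CHACHA20-POLY1305 for the TLS protocol (OpenSSL, GnuTLS, NSS, and OpenJDK)
    cipher@TLS = -CHACHA20-POLY1305
    # Allow using the FFDHE-1024 group with the SSH protocol (libssh and OpenSSH)
    group@SSH = FFDHE-1024+
    # Disable all CBC mode ciphers for the SSH protocol (libssh and OpenSSH)
    cipher@SSH = -*-CBC
    # Allow the AES-256-CBC cipher in applications using libssh
    cipher@libssh = AES-256-CBC+

update-crypto-policies --set DEFAULT:MYCRYPTO-1:SCOPES-AND-WILDCARDS
reboot
cat /etc/crypto-policies/state/CURRENT.pol | grep rsa_size
    min_rsa_size = 3072

update-crypto-policies --set DEFAULT:NO-SHA1    (Section 3.7)
reboot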
Procedure

Create a policy file for your customizations, or start by copying one of the four predefined policy levels. Edit the file with your custom cryptographic policy in a text editor of your choice to fit your requirements. Switch the system-wide cryptographic policy to your custom level. To make your cryptographic settings effective for already running services and applications, restart the system. The corresponding commands are listed after this section.

Additional resources: Custom Policies section in the update-crypto-policies(8) man page and the Crypto Policy Definition Format section in the crypto-policies(7) man page on your system; How to customize crypto policies in RHEL Red Hat blog article

3.9. Enhancing security with the FUTURE cryptographic policy using the crypto_policies RHEL system role

You can use the crypto_policies RHEL system role to configure the FUTURE policy on your managed nodes. This policy helps you achieve, for example:

Future-proofing against emerging threats: it anticipates advancements in computational power.
Enhanced security: stronger encryption standards require longer key lengths and more secure algorithms.
Compliance with high-security standards: in sectors such as healthcare, telco, and finance, data sensitivity is high and the availability of strong cryptography is critical.

Typically, FUTURE is suitable for environments handling highly sensitive data, preparing for future regulations, or adopting long-term security strategies.

Warning: Legacy systems or software might not support the more modern and stricter algorithms and protocols enforced by the FUTURE policy. For example, older systems might not support TLS 1.3 or larger key sizes. This could lead to compatibility problems. Also, using strong algorithms usually increases the computational workload, which could negatively affect your system performance.

Prerequisites: You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them.

Procedure

Create a playbook file, for example ~/playbook.yml, with the following content:

---
- name: Configure cryptographic policies
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure the FUTURE cryptographic security policy on the managed node
      ansible.builtin.include_role:
        name: rhel-system-roles.crypto_policies
      vars:
        - crypto_policies_policy: FUTURE
        - crypto_policies_reboot_ok: true

The settings specified in the example playbook include the following:

crypto_policies_policy: FUTURE — Configures the required cryptographic policy (FUTURE) on the managed node. It can be either the base policy or a base policy with some subpolicies. The specified base policy and subpolicies have to be available on the managed node. The default value is null, which means that the configuration is not changed and the crypto_policies RHEL system role only collects the Ansible facts.

crypto_policies_reboot_ok: true — Causes the system to reboot after the cryptographic policy change to make sure all of the services and applications read the new configuration files. The default value is false.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file on the control node.

Validate the playbook syntax (the command is shown after this section). Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
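The commands for Section 3.8 and for the playbook syntax check, taken from the command listing that accompanies this chapter (the section labels are added here for orientation):

Section 3.8 — create and set a custom policy file:
    cd /etc/crypto-policies/policies/
    touch MYPOLICY.pol
    cp /usr/share/crypto-policies/policies/DEFAULT.pol /etc/crypto-policies/policies/MYPOLICY.pol
    vi /etc/crypto-policies/policies/MYPOLICY.pol
    update-crypto-policies --set MYPOLICY
    reboot

Section 3.9 — validate the playbook syntax:
    ansible-playbook --syntax-check ~/playbook.yml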
Run the playbook with the ansible-playbook command (see the command listing after this chapter).

Warning: Because the FIPS:OSPP system-wide subpolicy contains further restrictions for cryptographic algorithms required by the Common Criteria (CC) certification, the system is less interoperable after you set it. For example, you cannot use RSA and DH keys shorter than 3072 bits, additional SSH algorithms, and several TLS groups. Setting FIPS:OSPP also prevents connecting to the Red Hat Content Delivery Network (CDN). Furthermore, you cannot integrate Active Directory (AD) into IdM deployments that use FIPS:OSPP, communication between RHEL hosts using FIPS:OSPP and AD domains might not work, or some AD accounts might not be able to authenticate. Note that your system is not CC-compliant after you set the FIPS:OSPP cryptographic subpolicy. The only correct way to make your RHEL system compliant with the CC standard is by following the guidance provided in the cc-config package. See the Common Criteria section on the Product compliance Red Hat Customer Portal page for a list of certified RHEL versions, validation reports, and links to CC guides hosted at the National Information Assurance Partnership (NIAP) website.

Verification

On the control node, create another playbook named, for example, verify_playbook.yml:

---
- name: Verification
  hosts: managed-node-01.example.com
  tasks:
    - name: Verify active cryptographic policy
      ansible.builtin.include_role:
        name: rhel-system-roles.crypto_policies
    - name: Display the currently active cryptographic policy
      ansible.builtin.debug:
        var: crypto_policies_active

The settings specified in the example playbook include the following:

crypto_policies_active — An exported Ansible fact that contains the currently active policy name in the format accepted by the crypto_policies_policy variable.

Validate the playbook syntax, then run the playbook. The crypto_policies_active variable shows the active policy on the managed node.

Additional resources: /usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file; /usr/share/doc/rhel-system-roles/crypto_policies/ directory; update-crypto-policies(8) and crypto-policies(7) manual pages

3.10. Additional resources

System-wide crypto policies in RHEL 8 and Strong crypto defaults in RHEL 8 and deprecation of weak crypto algorithms Knowledgebase articles
[ "update-crypto-policies --show DEFAULT", "update-crypto-policies --set <POLICY> <POLICY>", "reboot", "update-crypto-policies --show <POLICY>", "update-crypto-policies --set LEGACY Setting system policy to LEGACY", "wget --secure-protocol= TLSv1_1 --ciphers=\" SECURE128 \" https://example.com", "curl https://example.com --ciphers '@SECLEVEL=0:DES-CBC3-SHA:RSA-DES-CBC3-SHA'", "cd /etc/crypto-policies/policies/modules/", "touch MYCRYPTO-1 .pmod touch SCOPES-AND-WILDCARDS .pmod", "vi MYCRYPTO-1 .pmod", "min_rsa_size = 3072 hash = SHA2-384 SHA2-512 SHA3-384 SHA3-512", "vi SCOPES-AND-WILDCARDS .pmod", "Disable the AES-128 cipher, all modes cipher = -AES-128-* Disable CHACHA20-POLY1305 for the TLS protocol (OpenSSL, GnuTLS, NSS, and OpenJDK) cipher@TLS = -CHACHA20-POLY1305 Allow using the FFDHE-1024 group with the SSH protocol (libssh and OpenSSH) group@SSH = FFDHE-1024+ Disable all CBC mode ciphers for the SSH protocol (libssh and OpenSSH) cipher@SSH = -*-CBC Allow the AES-256-CBC cipher in applications using libssh cipher@libssh = AES-256-CBC+", "update-crypto-policies --set DEFAULT: MYCRYPTO-1 : SCOPES-AND-WILDCARDS", "reboot", "cat /etc/crypto-policies/state/CURRENT.pol | grep rsa_size min_rsa_size = 3072", "update-crypto-policies --set DEFAULT:NO-SHA1", "reboot", "cd /etc/crypto-policies/policies/ touch MYPOLICY .pol", "cp /usr/share/crypto-policies/policies/ DEFAULT .pol /etc/crypto-policies/policies/ MYPOLICY .pol", "vi /etc/crypto-policies/policies/ MYPOLICY .pol", "update-crypto-policies --set MYPOLICY", "reboot", "--- - name: Configure cryptographic policies hosts: managed-node-01.example.com tasks: - name: Configure the FUTURE cryptographic security policy on the managed node ansible.builtin.include_role: name: rhel-system-roles.crypto_policies vars: - crypto_policies_policy: FUTURE - crypto_policies_reboot_ok: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Verification hosts: managed-node-01.example.com tasks: - name: Verify active cryptographic policy ansible.builtin.include_role: name: rhel-system-roles.crypto_policies - name: Display the currently active cryptographic policy ansible.builtin.debug: var: crypto_policies_active", "ansible-playbook --syntax-check ~/verify_playbook.yml", "ansible-playbook ~/verify_playbook.yml TASK [debug] ************************** ok: [host] => { \"crypto_policies_active\": \"FUTURE\" }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/security_hardening/using-the-system-wide-cryptographic-policies_security-hardening
Chapter 2. Installer-provisioned infrastructure
Chapter 2. Installer-provisioned infrastructure 2.1. vSphere installation requirements Before you begin an installation using installer-provisioned infrastructure, be sure that your vSphere environment meets the following installation requirements. 2.1.1. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Both of these releases support Container Storage Interface (CSI) migration, which is enabled by default on OpenShift Container Platform 4.14. You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following tables: Table 2.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Table 2.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . CPU micro-architecture x86-64-v2 or higher OpenShift 4.13 and later are based on RHEL 9.2 host operating system which raised the microarchitecture requirements to x86-64-v2. See the RHEL Microarchitecture requirements documentation . You can verify compatibility by following the procedures outlined in this KCS article . Important To ensure the best performance conditions for your cluster workloads that operate on Oracle(R) Cloud Infrastructure (OCI) and on the Oracle(R) Cloud VMware Solution (OCVS) service, ensure volume performance units (VPUs) for your block volume are sized for your workloads. The following list provides some guidance in selecting the VPUs needed for specific performance needs: Test or proof of concept environment: 100 GB, and 20 to 30 VPUs. Base-production environment: 500 GB, and 60 VPUs. Heavy-use production environment: More than 500 GB, and 100 or more VPUs. Consider allocating additional VPUs to give enough capacity for updates and scaling activities. See Block Volume Performance Levels (Oracle documentation) . 2.1.2. 
Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description VRRP N/A Required for keepalived ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 2.1.3. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 2.1.4. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 2.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 2.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 2.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. Using Storage vMotion can cause issues and is not supported. Using VMware compute vMotion to migrate the workloads for both OpenShift Container Platform compute machines and control plane machines is generally supported, where generally implies that you meet all VMware best practices for vMotion. 
To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using VMware vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage Distributed Resource Scheduler (SDRS), which uses Storage vMotion, is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage DRS to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. 
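One way to confirm that a node's clock is actually synchronizing, once the node is reachable over SSH, is to query chrony, the NTP client that RHCOS uses. This is an illustrative check rather than a documented installation step, and the <node> placeholder is hypothetical:

$ ssh core@<node> 'chronyc tracking'     # reports the reference NTP source and the current offset
$ ssh core@<node> 'chronyc sources -v'   # lists the NTP servers the node learned, for example, from DHCP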
Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses For a network that uses DHCP, an installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Static IP addresses for vSphere nodes You can provision bootstrap, control plane, and compute nodes to be configured with static IP addresses in environments where Dynamic Host Configuration Protocol (DHCP) does not exist. To configure this environment, you must provide values to the platform.vsphere.hosts.role parameter in the install-config.yaml file. Important Static IP addresses for vSphere nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By default, the installation program is configured to use the DHCP for the network, but this network has limited configurable capabilities. After you define one or more machine pools in your install-config.yaml file, you can define network definitions for nodes on your network. Ensure that the number of network definitions matches the number of machine pools that you configured for your cluster. Example network configuration that specifies different roles # ... platform: vsphere: hosts: - role: bootstrap 1 networkDevice: ipAddrs: - 192.168.204.10/24 2 gateway: 192.168.204.1 3 nameservers: 4 - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.11/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.12/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.13/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: compute networkDevice: ipAddrs: - 192.168.204.14/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 # ... 
1 Valid network definition values include bootstrap , control-plane , and compute . You must list at least one bootstrap network definition in your install-config.yaml configuration file. 2 Lists IPv4, IPv6, or both IP addresses that the installation program passes to the network interface. The machine API controller assigns all configured IP addresses to the default network interface. 3 The default gateway for the network interface. 4 Lists up to 3 DNS nameservers. Important To enable the Technology Preview feature of static IP addresses for vSphere nodes for your cluster, you must include featureSet:TechPreviewNoUpgrade as the initial entry in the install-config.yaml file. After you deployed your cluster to run nodes with static IP addresses, you can scale a machine to use one of these static IP addresses. Additionally, you can use a machine set to configure a machine to use one of the configured static IP addresses. Additional resources Scaling machines to use static IP addresses Using a machine set to scale machines with configured static IP addresses 2.2. Preparing to install a cluster using installer-provisioned infrastructure You prepare to install an OpenShift Container Platform cluster on vSphere by completing the following steps: Downloading the installation program. Note If you are installing in a disconnected environment, you extract the installation program from the mirrored content. For more information, see Mirroring images for a disconnected installation . Installing the OpenShift CLI ( oc ). Note If you are installing in a disconnected environment, install oc to the mirror host. Generating an SSH key pair. You can use this key pair to authenticate into the OpenShift Container Platform cluster's nodes after it is deployed. Adding your vCenter's trusted root CA certificates to your system trust. 2.2.1. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . 
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.2.2. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.2.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core .
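For example, after the cluster is deployed, you could open a session on a node as the core user. This is only an illustrative sketch; <node_ip_or_fqdn> is a placeholder for the address of one of your RHCOS nodes, and it assumes that the matching private key is available on your local machine:
USD ssh core@<node_ip_or_fqdn>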
To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.2.4. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust.
For example, on a Fedora operating system, run the following command: # update-ca-trust extract 2.3. Installing a cluster on vSphere In OpenShift Container Platform version 4.14, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 2.3.1. Prerequisites You have completed the tasks in Preparing to install a cluster using installer-provisioned infrastructure . You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3.3. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. Important You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Important Some VMware vCenter Single Sign-On (SSO) environments with Active Directory (AD) integration might primarily require you to use the traditional login method, which requires the <domain>\ construct. To ensure that vCenter account permission checks complete properly, consider using the User Principal Name (UPN) login method, such as <username>@<fully_qualified_domainname> . Select the data center in your vCenter instance to connect to. Select the default vCenter datastore to use. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Paste the pull secret from Red Hat OpenShift Cluster Manager . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. 
Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.3.4. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.3.5. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 2.3.5.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.3.5.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.3.5.2.1. 
Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 2.3.5.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 2.3.6. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.3.7. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 2.4. Installing a cluster on vSphere with customizations In OpenShift Container Platform version 4.14, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 2.4.1. Prerequisites You have completed the tasks in Preparing to install a cluster using installer-provisioned infrastructure . You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible.
Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.4.3. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. For a cluster that was upgraded from a release, you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. 
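For example, a single failure domain ties these terms together in the install-config.yaml file. The following fragment is an illustrative sketch only; the datacenter, cluster, and tag names are placeholders, and the complete set of topology fields appears in the sample configuration later in this section:
failureDomains:
- name: us-east-1a
  region: us-east
  zone: us-east-1a
  server: <fully_qualified_domain_name>
  topology:
    datacenter: us-east
    computeCluster: "/us-east/host/<vsphere_cluster_name>"
In this sketch, us-east is a tag from the openshift-region tag category that is attached to the datacenter, and us-east-1a is a tag from the openshift-zone tag category that is attached to the vSphere cluster.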
Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to their respective datacenters and clusters. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 2.4.4. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the data center in your vCenter instance to connect to. Note After you create the installation configuration file, you can modify the file to create a multiple vSphere datacenters environment. This means that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. For more information about creating this environment, see the section named VMware vSphere region and zone enablement . Select the default vCenter datastore to use.
Warning You can specify the path of any datastore that exists in a datastore cluster. By default, Storage Distributed Resource Scheduler (SDRS), which uses Storage vMotion, is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage DRS to avoid data loss issues for your OpenShift Container Platform cluster. You cannot specify more than one datastore path. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on vSphere". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters 2.4.4.1. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<datacenter>/host/<cluster>" datacenter: <datacenter> datastore: "/<datacenter>/datastore/<datastore>" 7 networks: - <VM_Network_name> resourcePool: "/<datacenter>/host/<cluster>/Resources/<resourcePool>" 8 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: administrator@vsphere.local diskType: thin 9 fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 The cluster name that you specified in your DNS records. 5 Optional: Provides additional configuration for the machine pool parameters for the compute and control plane machines. 6 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 7 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 8 Optional: Provides an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 9 The vSphere disk provisioning method. Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 2.4.4.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.4.4.3. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. The default install-config.yaml file configuration from the previous release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster.
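Before you create the tag categories and tags in the following procedure, you can optionally confirm that the govc command can reach your vCenter instance. This is a sketch only, and it assumes the standard govc environment variables; replace the placeholder values with your own, and set GOVC_INSECURE=true only if your vCenter presents a self-signed certificate:
USD export GOVC_URL=https://<vcenter_fqdn>
USD export GOVC_USERNAME=<vsphere_user>
USD export GOVC_PASSWORD=<vsphere_password>
USD govc about
If the command prints version information for your vCenter Server, govc is configured correctly and you can proceed.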
Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each region vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. Sample install-config.yaml file with multiple datacenters defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: "/<datacenter1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<datacenter1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" --- 2.4.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. Important You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.4.6. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.4.7. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 2.4.7.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.4.7.2. 
Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.4.7.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 2.4.7.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. 
An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 2.4.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.4.9. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 2.1. 
Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 2.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 2.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 2.4.9.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, that runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on port 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. 
TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ...
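The HAProxy example above is a guideline only; adjust the bind addresses, server names, and back-end IP addresses to match your environment. As a minimal sketch, assuming HAProxy is installed as a systemd service and reads its configuration from the default /etc/haproxy/haproxy.cfg path, you can validate and apply the configuration before running the verification steps that follow:

# Check the configuration file for syntax errors before applying it
haproxy -c -f /etc/haproxy/haproxy.cfg

# Enable and restart the service so that the new listen sections take effect
sudo systemctl enable haproxy
sudo systemctl restart haproxy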
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the Kubernetes API server resource is accessible by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the machine config server resource is accessible by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the Ingress Controller resource is accessible on port 80 by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the Ingress Controller resource is accessible on port 443 by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update the records on your DNS server so that the cluster API and applications resolve to the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
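Because DNS propagation can take time, it can be useful to confirm that the records resolve before you run the curl verification that follows. The commands below are a minimal sketch that assumes the dig utility is available on your workstation and uses the same placeholder names as the example records above:

# Confirm that the API record resolves to the load balancer front-end IP address
dig +short api.<cluster_name>.<base_domain> A

# Confirm that the apps record resolves to the same address; if apps is configured as a
# wildcard record, names such as console-openshift-console.apps.<cluster_name>.<base_domain>
# resolve to the same value
dig +short apps.<cluster_name>.<base_domain> A

# Both queries should print <load_balancer_ip_address>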
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 2.4.10. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 2.5. Installing a cluster on vSphere with network customizations In OpenShift Container Platform version 4.14, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 2.5.1. Prerequisites You have completed the tasks in Preparing to install a cluster using installer-provisioned infrastructure . You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, confirm with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.5.3. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. For a cluster that was upgraded from a release, you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere datacenter. 
If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 2.5.4. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration.
Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the data center in your vCenter instance to connect to. Note After you create the installation configuration file, you can modify the file to create a multiple vSphere datacenters environment. This means that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. For more information about creating this environment, see the section named VMware vSphere region and zone enablement . Select the default vCenter datastore to use. Warning You can specify the path of any datastore that exists in a datastore cluster. By default, Storage Distributed Resource Scheduler (SDRS), which uses Storage vMotion, is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage DRS to avoid data loss issues for your OpenShift Container Platform cluster. You cannot specify more than one datastore path. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters 2.5.4.1. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 5 serviceNetwork: - 172.30.0.0/16 platform: vsphere: 6 apiVIPs: - 10.0.0.1 failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<datacenter>/host/<cluster>" datacenter: <datacenter> datastore: "/<datacenter>/datastore/<datastore>" 8 networks: - <VM_Network_name> resourcePool: "/<datacenter>/host/<cluster>/Resources/<resourcePool>" 9 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 10 fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 The cluster name that you specified in your DNS records. 6 Optional: Provides additional configuration for the machine pool parameters for the compute and control plane machines. 7 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 8 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 9 Optional: Provides an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 10 The vSphere disk provisioning method. 5 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 2.5.4.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. 
Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.5.4.3. 
Optional: Deploying with dual-stack networking For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. For a cluster with the IPv4 family as the primary address family, specify the IPv4 setting first. For a cluster with the IPv6 family as the primary address family, specify the IPv6 setting first. machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file . The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service. platform: vsphere: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6> Note For a cluster with dual-stack networking configuration, you must assign both IPv4 and IPv6 addresses to the same interface. 2.5.4.4. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. The default install-config.yaml file configuration from the release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. 
USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each region vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. Sample install-config.yaml file with multiple datacenters defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: "/<datacenter1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<datacenter1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" --- 2.5.5. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 2.5.6. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. 
You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute machineSets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 2.5.6.1. Specifying multiple subnets for your network Before you install an OpenShift Container Platform cluster on a vSphere host, you can specify multiple subnets for a networking implementation so that the vSphere cloud controller manager (CCM) can select the appropriate subnet for a given networking situation. vSphere can use the subnet for managing pods and services on your cluster. For this configuration, you must specify internal and external Classless Inter-Domain Routing (CIDR) implementations in the vSphere CCM configuration. Each CIDR implementation lists an IP address range that the CCM uses to decide what subnets interact with traffic from internal and external networks. Important Failure to configure internal and external CIDR implementations in the vSphere CCM configuration can cause the vSphere CCM to select the wrong subnet. This situation causes the following error: This configuration can cause new nodes that associate with a MachineSet object with a single subnet to become unusable as each new node receives the node.cloudprovider.kubernetes.io/uninitialized taint. These situations can cause communication issues with the Kubernetes API server that can cause installation of the cluster to fail. Prerequisites You created Kubernetes manifest files for your OpenShift Container Platform cluster. 
Procedure From the directory where you store your OpenShift Container Platform cluster manifest files, open the manifests/cluster-infrastructure-02-config.yml manifest file. Add a nodeNetworking object to the file and specify internal and external network subnet CIDR implementations for the object. Tip For most networking situations, consider setting the standard multiple-subnet configuration. This configuration requires that you set the same IP address ranges in the nodeNetworking.internal.networkSubnetCidr and nodeNetworking.external.networkSubnetCidr parameters. Example of a configured cluster-infrastructure-02-config.yml manifest file apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain ... nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> # ... Additional resources Cluster Network Operator configuration .spec.platformSpec.vsphere.nodeNetworking 2.5.7. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.5.7.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.7. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. 
Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.8. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. You can change this value by migrating from OpenShift SDN to OVN-Kubernetes. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 2.9. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.10. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. 
You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.11. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.12. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.13. 
policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.14. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.15. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 2.16. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 2.17. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. 
Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.5.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. Important You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.5.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.5.10. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 2.5.10.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.5.10.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.5.10.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 
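On vSphere, the Image Registry Operator bootstraps as Removed, as noted above, so you might need to return it to the Managed state before you configure storage in the following procedure. The command below is a minimal sketch of one way to make that change from the CLI; it is equivalent to editing the managementState field in the Operator configuration:

# Switch the Image Registry Operator from Removed to Managed
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'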
Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 2.5.10.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 2.5.11.
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.5.12. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 2.4. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 2.5. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 2.6. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. 
Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 2.5.12.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, which runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #...
listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> 
https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cache HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy:
strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 2.5.13. Configuring network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes. Note You can scale the remote workers by creating a worker machine set in a separate subnet. Important When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes. Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example, change mastersSchedulable: false to mastersSchedulable: true in the manifest. Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 2.5.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 2.6. Installing a cluster on vSphere in a restricted network In OpenShift Container Platform 4.14, you can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content.
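As an optional check before you begin a restricted network installation, you can confirm that the installation host can reach the mirror registry through the standard registry API. The host name, port, credentials, and certificate authority (CA) file in this example are placeholders for your environment: USD curl -u <registry_user>:<registry_password> --cacert <mirror_registry_ca.pem> https://<mirror_host_name>:5000/v2/_catalog A successful response returns a JSON list of the repositories that you mirrored.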
Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 2.6.1. Prerequisites You have completed the tasks in Preparing to install a cluster using installer-provisioned infrastructure . You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide the ReadWriteMany access mode. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. 2.6.2. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 2.6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 2.6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 2.6.4. 
Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network VMware vSphere environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.14 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - vSphere image. Upload the image you downloaded to a location that is accessible from the bastion server. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 2.6.5. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. For a cluster that was upgraded from a release, you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. 
Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 2.6.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the data center in your vCenter instance to connect to. Note After you create the installation configuration file, you can modify the file to create a multiple vSphere datacenters environment. This means that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. For more information about creating this environment, see the section named VMware vSphere region and zone enablement .
Select the default vCenter datastore to use. Warning You can specify the path of any datastore that exists in a datastore cluster. By default, Storage Distributed Resource Scheduler (SDRS), which uses Storage vMotion, is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage DRS to avoid data loss issues for your OpenShift Container Platform cluster. You cannot specify more than one datastore path. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. In the install-config.yaml file, set the value of platform.vsphere.clusterOSImage to the image location or name. For example: platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. 
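As a side note, the <credentials> value in the pullSecret entry is the base64 encoding of the user name and password for your mirror registry. One way to generate it, using placeholder values, is: USD echo -n '<registry_user>:<registry_password>' | base64 -w0 Paste the resulting string into the auth value for your registry host in the pullSecret entry.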
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters 2.6.6.1. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<datacenter>/host/<cluster>" datacenter: <datacenter> datastore: "/<datacenter>/datastore/<datastore>" 7 networks: - <VM_Network_name> resourcePool: "/<datacenter>/host/<cluster>/Resources/<resourcePool>" 8 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 9 clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2> 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 The cluster name that you specified in your DNS records. 5 Optional: Provides additional configuration for the machine pool parameters for the compute and control plane machines. 6 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 7 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. 
For more information, see "VMware vSphere region and zone enablement". 8 Optional: Provides an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 9 The vSphere disk provisioning method. 10 The location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that is accessible from the bastion server. 11 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 12 Provide the contents of the certificate file that you used for your mirror registry. 13 Provide the imageContentSources section from the output of the command to mirror the repository. Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 2.6.6.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster and uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.6.6.3. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. The default install-config.yaml file configuration from the previous release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails.
USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each region vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. Sample install-config.yaml file with multiple datacenters defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: "/<datacenter1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<datacenter1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" --- 2.6.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. Important You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 
2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.6.8. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.6.9. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 2.6.10. 
Creating registry storage After you install the cluster, you must create storage for the Registry Operator. 2.6.10.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.6.10.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.6.10.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica.
Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 2.6.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.6.12. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 2.7. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 2.8. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 2.9. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. 
You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 2.6.12.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, that runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on port 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are be reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable to all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... 
listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> 
https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: 
strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 2.6.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster . Set up your registry and configure registry storage .
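One of the configuration options described in "Services for an external load balancer" is mapping the Ingress Controller to a specific set of nodes with a node selector. The following is a minimal sketch of that mapping; the node-role.kubernetes.io/infra label and the use of the default IngressController are assumptions for illustration, not requirements from this chapter:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""

To collect the back-end IP addresses of the selected nodes for the load balancer targets, a command such as oc get nodes -l node-role.kubernetes.io/infra -o wide lists each matching node together with its internal IP address.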
[ "platform: vsphere: hosts: - role: bootstrap 1 networkDevice: ipAddrs: - 192.168.204.10/24 2 gateway: 192.168.204.1 3 nameservers: 4 - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.11/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.12/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.13/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: compute networkDevice: ipAddrs: - 192.168.204.14/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 7 networks: - <VM_Network_name> resourcePool: 
\"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 9 fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "govc tags.category.create -d \"OpenShift region\" openshift-region", "govc tags.category.create -d \"OpenShift zone\" openshift-zone", "govc tags.create -c <region_tag_category> <region_tag>", "govc tags.create -c <zone_tag_category> <zone_tag>", "govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>", "govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1", "--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": 
\"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 
metadata: creationTimestamp: null name: test 4 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 5 serviceNetwork: - 172.30.0.0/16 platform: vsphere: 6 apiVIPs: - 10.0.0.1 failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 8 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 9 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 10 fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112", "platform: vsphere: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>", "govc tags.category.create -d \"OpenShift region\" openshift-region", "govc tags.category.create -d \"OpenShift zone\" openshift-zone", "govc tags.create -c <region_tag_category> <region_tag>", "govc tags.create -c <zone_tag_category> <zone_tag>", "govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>", "govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1", "--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster 
spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "ERROR Bootstrap failed to complete: timed out waiting for the condition ERROR Failed to wait for bootstrapping to complete. This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane.", "apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6>", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth 
GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: 
strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "cd ~/clusterconfigs", "cd manifests", "touch cluster-network-avoid-workers-99-config.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"", "sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 7 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email 
protected] diskType: thin 9 clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "govc tags.category.create -d \"OpenShift region\" openshift-region", "govc tags.category.create -d \"OpenShift zone\" openshift-zone", "govc tags.create -c <region_tag_category> <region_tag>", "govc tags.create -c <zone_tag_category> <zone_tag>", "govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>", "govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1", "--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: 
https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_vsphere/installer-provisioned-infrastructure
Chapter 4. Configuring Ansible automation controller on OpenShift Container Platform
Chapter 4. Configuring Ansible automation controller on OpenShift Container Platform During a Kubernetes upgrade, automation controller must be running. 4.1. Minimizing downtime during OpenShift Container Platform upgrade Make the following configuration changes in automation controller to minimize downtime during the upgrade. Prerequisites Ansible Automation Platform 2.4 or later Ansible automation controller 4.4 or later OpenShift Container Platform: Later than 4.10.42 Later than 4.11.16 Later than 4.12.0 High availability (HA) deployment of Postgres Multiple worker nodes that automation controller pods can be scheduled on Procedure Enable RECEPTOR_KUBE_SUPPORT_RECONNECT in the AutomationController specification: apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: ... spec: ... ee_extra_env: | - name: RECEPTOR_KUBE_SUPPORT_RECONNECT value: enabled Enable the graceful termination feature in the AutomationController specification: termination_grace_period_seconds: <time to wait for job to finish> Configure podAntiAffinity for the web and task pods to spread out the deployment in the AutomationController specification: task_affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - awx-task topologyKey: topology.kubernetes.io/zone weight: 100 web_affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - awx-web topologyKey: topology.kubernetes.io/zone weight: 100 Configure PodDisruptionBudget in OpenShift Container Platform: --- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-job-pods spec: maxUnavailable: 0 selector: matchExpressions: - key: ansible-awx-job-id operator: Exists --- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-web-pods spec: minAvailable: 1 selector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - <automationcontroller_instance_name>-web --- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-task-pods spec: minAvailable: 1 selector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - <automationcontroller_instance_name>-task
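For orientation, the AutomationController fragments above can be combined in a single resource. The following sketch assumes an instance named example in a namespace named ansible-automation-platform and a 600-second grace period; these values are placeholders for illustration, not values from this procedure:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: example
  namespace: ansible-automation-platform
spec:
  # Setting from the graceful termination step: time to wait for running jobs to finish
  termination_grace_period_seconds: 600
  # Setting from the first step: enable receptor reconnect support
  ee_extra_env: |
    - name: RECEPTOR_KUBE_SUPPORT_RECONNECT
      value: enabled
  # Prefer spreading task pods across zones
  task_affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                    - awx-task
            topologyKey: topology.kubernetes.io/zone
          weight: 100
  # Prefer spreading web pods across zones
  web_affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                    - awx-web
            topologyKey: topology.kubernetes.io/zone
          weight: 100

The PodDisruptionBudget objects are created directly in OpenShift Container Platform, for example with oc apply -f <pdb_file>.yaml in the namespace where automation controller runs.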
[ "apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: spec: ee_extra_env: | - name: RECEPTOR_KUBE_SUPPORT_RECONNECT value: enabled ```", "termination_grace_period_seconds: <time to wait for job to finish>", "task_affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - awx-task topologyKey: topology.kubernetes.io/zone weight: 100 web_affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - awx-web topologyKey: topology.kubernetes.io/zone weight: 100", "--- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-job-pods spec: maxUnavailable: 0 selector: matchExpressions: - key: ansible-awx-job-id operator: Exists --- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-web-pods spec: minAvailable: 1 selector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - <automationcontroller_instance_name>-web --- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-task-pods spec: minAvailable: 1 selector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - <automationcontroller_instance_name>-task" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/performance_considerations_for_operator_environments/assembly-configure-controller-ocp
Security APIs
Security APIs OpenShift Container Platform 4.16 Reference guide for security APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_apis/index
Chapter 3. User tasks
Chapter 3. User tasks 3.1. Creating applications from installed Operators This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. 3.1.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.15 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.2. Installing Operators in your namespace If a cluster administrator has delegated Operator installation permissions to your account, you can install and subscribe an Operator to your namespace in a self-service manner. 3.2.1. Prerequisites A cluster administrator must add certain permissions to your OpenShift Container Platform user account to allow self-service Operator installation to a namespace. See Allowing non-cluster administrators to install Operators for details. 3.2.2. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. 
As a user with the proper permissions, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose a specific namespace in which to install the Operator. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Understanding OperatorHub 3.2.3. Installing from OperatorHub by using the web console You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page, configure your Operator installation: If you want to install a specific version of an Operator, select an Update channel and Version from the lists. You can browse the various versions of an Operator across any channels it might have, view the metadata for that channel and version, and select the exact version you want to install. Note The version selection defaults to the latest version for the channel selected. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, Manual approval is required when not installing the latest version for the selected channel. Installing an Operator with Manual approval causes all Operators installed within the namespace to function with the Manual approval strategy and all Operators are updated together. If you want to update Operators independently, install Operators into separate namespaces. Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. For clusters on cloud providers with token authentication enabled: If the cluster uses AWS STS ( STS Mode in the web console), enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field. To create the role's ARN, follow the procedure described in Preparing AWS account . 
If the cluster uses Microsoft Entra Workload ID ( Workload Identity / Federated Identity Mode in the web console), add the client ID, tenant ID, and subscription ID in the appropriate field. For Update approval , select either the Automatic or Manual approval strategy. Important If the web console shows that the cluster uses AWS STS or Microsoft Entra Workload ID, you must set Update approval to Manual . Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster: If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. Verification After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should eventually resolve to Succeeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to Succeeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. When the Operator is installed, the metadata indicates which channel and version are installed. Note The Channel and Version dropdown menus are still available for viewing other version metadata in this catalog context. 3.2.4. Installing from OperatorHub by using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object. For SingleNamespace install mode, you must also ensure an appropriate Operator group exists in the related namespace. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. Tip In most cases, the web console method of this procedure is preferred because it automates tasks in the background, such as handling the creation of OperatorGroup and Subscription objects automatically when choosing SingleNamespace mode. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. You have installed the OpenShift CLI ( oc ). Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example 3.1. Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m # ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m # ... 
etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m # ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace Example 3.2. Example output # ... Kind: PackageManifest # ... Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces # ... Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 # ... Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4 1 Indicates which install modes are supported. 2 3 Example channel names. 4 The channel selected by default if one is not specified. Tip You can print an Operator's version and channel information in YAML format by running the following command: USD oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog: USD oc get packagemanifest \ --selector=catalog=<catalogsource_name> \ --field-selector metadata.name=<operator_name> \ -n <catalog_namespace> -o yaml Important If you do not specify the Operator's catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met: Multiple catalogs are installed in the same namespace. The catalogs contain the same Operators or Operators with the same name. If the Operator you intend to install supports the AllNamespaces install mode, and you choose to use this mode, skip this step, because the openshift-operators namespace already has an appropriate Operator group in place by default, called global-operators . If the Operator you intend to install supports the SingleNamespace install mode, and you choose to use this mode, you must ensure an appropriate Operator group exists in the related namespace. If one does not exist, you can create create one by following these steps: Important You can only have one Operator group per namespace. For more information, see "Operator groups". Create an OperatorGroup object YAML file, for example operatorgroup.yaml , for SingleNamespace install mode: Example OperatorGroup object for SingleNamespace install mode apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2 1 2 For SingleNamespace install mode, use the same <namespace> value for both the metadata.namespace and spec.targetNamespaces fields. Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object to subscribe a namespace to an Operator: Create a YAML file for the Subscription object, for example subscription.yaml : Note If you want to subscribe to a specific version of an Operator, set the startingCSV field to the desired version and set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. For details, see the following "Example Subscription object with a specific starting Operator version". Example 3.3. 
Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. For SingleNamespace install mode usage, specify the relevant single namespace. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate environment variables in the container. 8 The volumes parameter defines a list of volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Example 3.4. Example Subscription object with a specific starting Operator version apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2 1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. For clusters on cloud providers with token authentication enabled, configure your Subscription object by following these steps: Ensure the Subscription object is set to manual update approvals: kind: Subscription # ... spec: installPlanApproval: Manual 1 1 Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update. Include the relevant cloud provider-specific fields in the Subscription object's config section: If the cluster is in AWS STS mode, include the following fields: kind: Subscription # ... spec: config: env: - name: ROLEARN value: "<role_arn>" 1 1 Include the role ARN details. 
If the cluster is in Microsoft Entra Workload ID mode, include the following fields: kind: Subscription # ... spec: config: env: - name: CLIENTID value: "<client_id>" 1 - name: TENANTID value: "<tenant_id>" 2 - name: SUBSCRIPTIONID value: "<subscription_id>" 3 1 Include the client ID. 2 Include the tenant ID. 3 Include the subscription ID. Create the Subscription object by running the following command: $ oc apply -f subscription.yaml If you set the installPlanApproval field to Manual, manually approve the pending install plan to complete the Operator installation. For more information, see "Manually approving a pending Operator update". At this point, OLM is aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verification Check the status of the Subscription object for your installed Operator by running the following command: $ oc describe subscription <subscription_name> -n <namespace> If you created an Operator group for SingleNamespace install mode, check the status of the OperatorGroup object by running the following command: $ oc describe operatorgroup <operatorgroup_name> -n <namespace> Additional resources Operator groups Channel names Manually approving a pending Operator update
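If you chose the Manual approval strategy, the following is a minimal sketch of approving a pending install plan from the CLI; the full procedure is covered in "Manually approving a pending Operator update", and <namespace> and <install_plan_name> are placeholders for your own values:

$ oc get installplan -n <namespace>
# Identify the install plan whose APPROVED column is false, then approve it:
$ oc patch installplan <install_plan_name> -n <namespace> \
    --type merge --patch '{"spec":{"approved":true}}'

After the install plan is approved, OLM proceeds with the installation and the CSV eventually reports Succeeded.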
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operators/user-tasks
Chapter 8. Securing Kafka
Chapter 8. Securing Kafka A secure deployment of Streams for Apache Kafka might encompass one or more of the following security measures: Encryption for data exchange Authentication to prove identity Authorization to allow or decline actions executed by users Running Streams for Apache Kafka on FIPS-enabled OpenShift clusters to ensure data security and system interoperability 8.1. Encryption Streams for Apache Kafka supports Transport Layer Security (TLS), a protocol for encrypted communication. Communication is always encrypted for communication between: Kafka brokers ZooKeeper nodes Kafka brokers and ZooKeeper nodes Operators and Kafka brokers Operators and ZooKeeper nodes Kafka Exporter You can also configure TLS encryption between Kafka brokers and clients. TLS is specified for external clients when configuring an external listener for the Kafka broker. Streams for Apache Kafka components and Kafka clients use digital certificates for encryption. The Cluster Operator sets up certificates to enable encryption within the Kafka cluster. You can provide your own server certificates, referred to as Kafka listener certificates , for communication between Kafka clients and Kafka brokers, and inter-cluster communication. Streams for Apache Kafka uses Secrets to store the certificates and private keys required for mTLS in PEM and PKCS #12 format. A TLS CA (certificate authority) issues certificates to authenticate the identity of a component. Streams for Apache Kafka verifies the certificates for the components against the CA certificate. Streams for Apache Kafka components are verified against the cluster CA Kafka clients are verified against the clients CA 8.2. Authentication Kafka listeners use authentication to ensure a secure client connection to the Kafka cluster. Supported authentication mechanisms: mTLS authentication (on listeners with TLS-enabled encryption) SASL SCRAM-SHA-512 OAuth 2.0 token based authentication Custom authentication The User Operator manages user credentials for mTLS and SCRAM authentication, but not OAuth 2.0. For example, through the User Operator you can create a user representing a client that requires access to the Kafka cluster, and specify tls as the authentication type. Using OAuth 2.0 token-based authentication, application clients can access Kafka brokers without exposing account credentials. An authorization server handles the granting of access and inquiries about access. Custom authentication allows for any type of Kafka-supported authentication. It can provide more flexibility, but also adds complexity. 8.3. Authorization Kafka clusters use authorization to control the operations that are permitted on Kafka brokers by specific clients or users. If applied to a Kafka cluster, authorization is enabled for all listeners used for client connection. If a user is added to a list of super users in a Kafka broker configuration, the user is allowed unlimited access to the cluster regardless of any authorization constraints implemented through authorization mechanisms. Supported authorization mechanisms: Simple authorization OAuth 2.0 authorization (if you are using OAuth 2.0 token-based authentication) Open Policy Agent (OPA) authorization Custom authorization Simple authorization uses the AclAuthorizer and StandardAuthorizer Kafka plugins, which are responsible for managing Access Control Lists (ACLs) that specify user access to various resources. For custom authorization, you configure your own Authorizer plugin to enforce ACL rules. 
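To illustrate how these options fit together, the following is a minimal sketch of a Kafka custom resource that combines a TLS-encrypted internal listener, mTLS client authentication, and simple authorization. The cluster name, namespace, replica counts, and ephemeral storage are illustrative assumptions, not requirements:

$ cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
spec:
  kafka:
    replicas: 3
    listeners:
      # TLS-encrypted listener with mutual TLS (mTLS) client authentication
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
    authorization:
      # Simple authorization: access is controlled through ACL rules
      type: simple
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    userOperator: {}
EOF

With a configuration along these lines, the User Operator can issue credentials through KafkaUser resources for clients that connect to the tls listener, and ACL rules determine what those clients may do.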
OAuth 2.0 and OPA provide policy-based control from an authorization server. Security policies and permissions used to grant access to resources on Kafka brokers are defined in the authorization server. URLs are used to connect to the authorization server and verify that an operation requested by a client or user is allowed or denied. Users and clients are matched against the policies created in the authorization server that permit access to perform specific actions on Kafka brokers. 8.4. Federal Information Processing Standards (FIPS) Federal Information Processing Standards (FIPS) are a set of security standards established by the US government to ensure the confidentiality, integrity, and availability of sensitive data and information that is processed or transmitted by information systems. The OpenJDK used in Streams for Apache Kafka container images automatically enables FIPS mode when running on a FIPS-enabled OpenShift cluster. Note If you do not want FIPS mode enabled in OpenJDK, you can disable it in the deployment configuration of the Cluster Operator using the FIPS_MODE environment variable. For more information about the NIST validation program and validated modules, see Cryptographic Module Validation Program on the NIST website. Note FIPS support has not been tested for the technology previews of Streams for Apache Kafka Proxy and Streams for Apache Kafka Console. While they are expected to function properly, we cannot guarantee full support at this time.
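As a brief sketch of the FIPS_MODE option mentioned in the note above, the variable is set on the Cluster Operator deployment. The deployment name strimzi-cluster-operator and the namespace placeholder are assumptions based on the example installation files and may differ in your environment:

$ oc set env deployment/strimzi-cluster-operator FIPS_MODE=disabled -n <operator_namespace>
# Confirm the variable is set on the deployment:
$ oc set env deployment/strimzi-cluster-operator --list -n <operator_namespace> | grep FIPS_MODE

The Cluster Operator pod restarts with the new setting, and the Kafka operands it deploys should then run with FIPS mode disabled in the JVM.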
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_on_openshift_overview/security-overview_str
Chapter 11. baremetal
Chapter 11. baremetal This chapter describes the commands under the baremetal command. 11.1. baremetal allocation create Create a new baremetal allocation. Usage: Table 11.1. Command arguments Value Summary -h, --help Show this help message and exit --resource-class RESOURCE_CLASS Resource class to request. --trait TRAITS A trait to request. can be specified multiple times. --candidate-node CANDIDATE_NODES A candidate node for this allocation. can be specified multiple times. If at least one is specified, only the provided candidate nodes are considered for the allocation. --name NAME Unique name of the allocation. --uuid UUID Uuid of the allocation. --owner OWNER Owner of the allocation. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. --wait [<time-out>] Wait for the new allocation to become active. an error is returned if allocation fails and --wait is used. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. --node NODE Backfill this allocation from the provided node that has already been deployed. Bypasses the normal allocation process. Table 11.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.2. baremetal allocation delete Unregister baremetal allocation(s). Usage: Table 11.6. Positional arguments Value Summary <allocation> Allocations(s) to delete (name or uuid). Table 11.7. Command arguments Value Summary -h, --help Show this help message and exit 11.3. baremetal allocation list List baremetal allocations. Usage: Table 11.8. Command arguments Value Summary -h, --help Show this help message and exit --limit <limit> Maximum number of allocations to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <allocation> Allocation uuid (for example, of the last allocation in the list from a request). Returns the list of allocations after this UUID. --sort <key>[:<direction>] Sort output by specified allocation fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --node <node> Only list allocations of this node (name or uuid). --resource-class <resource_class> Only list allocations with this resource class. --state <state> Only list allocations in this state. --owner <owner> Only list allocations with this owner. --long Show detailed information about the allocations. --fields <field> [<field> ... ] One or more allocation fields. only these fields will be fetched from the server. Can not be used when -- long is specified. Table 11.9. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.10. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.4. baremetal allocation set Set baremetal allocation properties. Usage: Table 11.13. Positional arguments Value Summary <allocation> Name or uuid of the allocation Table 11.14. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set the name of the allocation --extra <key=value> Extra property to set on this allocation (repeat option to set multiple extra properties) 11.5. baremetal allocation show Show baremetal allocation details. Usage: Table 11.15. Positional arguments Value Summary <id> Uuid or name of the allocation Table 11.16. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more allocation fields. only these fields will be fetched from the server. Table 11.17. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.18. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.19. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.20. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.6. baremetal allocation unset Unset baremetal allocation properties. Usage: Table 11.21. Positional arguments Value Summary <allocation> Name or uuid of the allocation Table 11.22. Command arguments Value Summary -h, --help Show this help message and exit --name Unset the name of the allocation --extra <key> Extra property to unset on this baremetal allocation (repeat option to unset multiple extra property). 11.7. baremetal chassis create Create a new chassis. Usage: Table 11.23. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description for the chassis --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. 
--uuid <uuid> Unique uuid of the chassis Table 11.24. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.26. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.8. baremetal chassis delete Delete a chassis. Usage: Table 11.28. Positional arguments Value Summary <chassis> Uuids of chassis to delete Table 11.29. Command arguments Value Summary -h, --help Show this help message and exit 11.9. baremetal chassis list List the chassis. Usage: Table 11.30. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more chassis fields. only these fields will be fetched from the server. Cannot be used when --long is specified. --limit <limit> Maximum number of chassis to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --long Show detailed information about the chassis --marker <chassis> Chassis uuid (for example, of the last chassis in the list from a request). Returns the list of chassis after this UUID. --sort <key>[:<direction>] Sort output by specified chassis fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. Table 11.31. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.32. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.33. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.10. baremetal chassis set Set chassis properties. Usage: Table 11.35. Positional arguments Value Summary <chassis> Uuid of the chassis Table 11.36. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Set the description of the chassis --extra <key=value> Extra to set on this chassis (repeat option to set multiple extras) 11.11. 
baremetal chassis show Show chassis details. Usage: Table 11.37. Positional arguments Value Summary <chassis> Uuid of the chassis Table 11.38. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more chassis fields. only these fields will be fetched from the server. Table 11.39. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.40. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.41. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.42. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.12. baremetal chassis unset Unset chassis properties. Usage: Table 11.43. Positional arguments Value Summary <chassis> Uuid of the chassis Table 11.44. Command arguments Value Summary -h, --help Show this help message and exit --description Clear the chassis description --extra <key> Extra to unset on this chassis (repeat option to unset multiple extras) 11.13. baremetal conductor list List baremetal conductors Usage: Table 11.45. Command arguments Value Summary -h, --help Show this help message and exit --limit <limit> Maximum number of conductors to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <conductor> Hostname of the conductor (for example, of the last conductor in the list from a request). Returns the list of conductors after this conductor. --sort <key>[:<direction>] Sort output by specified conductor fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --long Show detailed information about the conductors. --fields <field> [<field> ... ] One or more conductor fields. only these fields will be fetched from the server. Can not be used when -- long is specified. Table 11.46. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.47. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.48. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.49. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.14. baremetal conductor show Show baremetal conductor details Usage: Table 11.50. Positional arguments Value Summary <conductor> Hostname of the conductor Table 11.51. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more conductor fields. only these fields will be fetched from the server. Table 11.52. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.53. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.54. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.55. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.15. baremetal create Create resources from files Usage: Table 11.56. Positional arguments Value Summary <file> File (.yaml or .json) containing descriptions of the resources to create. Can be specified multiple times. Table 11.57. Command arguments Value Summary -h, --help Show this help message and exit 11.16. baremetal deploy template create Create a new deploy template Usage: Table 11.58. Positional arguments Value Summary <name> Unique name for this deploy template. must be a valid trait name Table 11.59. Command arguments Value Summary -h, --help Show this help message and exit --uuid <uuid> Uuid of the deploy template. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. --steps <steps> The deploy steps. may be the path to a yaml file containing the deploy steps; OR - , with the deploy steps being read from standard input; OR a JSON string. The value should be a list of deploy-step dictionaries; each dictionary should have keys interface , step , args and priority . Table 11.60. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.61. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.62. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.63. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.17. baremetal deploy template delete Delete deploy template(s). Usage: Table 11.64. Positional arguments Value Summary <template> Name(s) or uuid(s) of the deploy template(s) to delete. Table 11.65. 
Command arguments Value Summary -h, --help Show this help message and exit 11.18. baremetal deploy template list List baremetal deploy templates. Usage: Table 11.66. Command arguments Value Summary -h, --help Show this help message and exit --limit <limit> Maximum number of deploy templates to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <template> Deploytemplate uuid (for example, of the last deploy template in the list from a request). Returns the list of deploy templates after this UUID. --sort <key>[:<direction>] Sort output by specified deploy template fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --long Show detailed information about deploy templates. --fields <field> [<field> ... ] One or more deploy template fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 11.67. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.68. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.69. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.70. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.19. baremetal deploy template set Set baremetal deploy template properties. Usage: Table 11.71. Positional arguments Value Summary <template> Name or uuid of the deploy template Table 11.72. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set unique name of the deploy template. must be a valid trait name. --steps <steps> The deploy steps. may be the path to a yaml file containing the deploy steps; OR - , with the deploy steps being read from standard input; OR a JSON string. The value should be a list of deploy-step dictionaries; each dictionary should have keys interface , step , args and priority . --extra <key=value> Extra to set on this baremetal deploy template (repeat option to set multiple extras). 11.20. baremetal deploy template show Show baremetal deploy template details. Usage: Table 11.73. Positional arguments Value Summary <template> Name or uuid of the deploy template. Table 11.74. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more deploy template fields. only these fields will be fetched from the server. Table 11.75. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.76. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.77. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.78. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.21. baremetal deploy template unset Unset baremetal deploy template properties. Usage: Table 11.79. Positional arguments Value Summary <template> Name or uuid of the deploy template Table 11.80. Command arguments Value Summary -h, --help Show this help message and exit --extra <key> Extra to unset on this baremetal deploy template (repeat option to unset multiple extras). 11.22. baremetal driver list List the enabled drivers. Usage: Table 11.81. Command arguments Value Summary -h, --help Show this help message and exit --type <type> Type of driver ("classic" or "dynamic"). the default is to list all of them. --long Show detailed information about the drivers. Table 11.82. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.83. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.84. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.85. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.23. baremetal driver passthru call Call a vendor passthru method for a driver. Usage: Table 11.86. Positional arguments Value Summary <driver> Name of the driver. <method> Vendor passthru method to be called. Table 11.87. Command arguments Value Summary -h, --help Show this help message and exit --arg <key=value> Argument to pass to the passthru method (repeat option to specify multiple arguments). --http-method <http-method> The http method to use in the passthru request. one of DELETE, GET, PATCH, POST, PUT. Defaults to POST . Table 11.88. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.89. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.90. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.91. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.24. baremetal driver passthru list List available vendor passthru methods for a driver. Usage: Table 11.92. Positional arguments Value Summary <driver> Name of the driver. Table 11.93. Command arguments Value Summary -h, --help Show this help message and exit Table 11.94. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.95. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.96. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.97. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.25. baremetal driver property list List the driver properties. Usage: Table 11.98. Positional arguments Value Summary <driver> Name of the driver. Table 11.99. Command arguments Value Summary -h, --help Show this help message and exit Table 11.100. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.101. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.102. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.103. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.26. 
baremetal driver raid property list List a driver's RAID logical disk properties. Usage: Table 11.104. Positional arguments Value Summary <driver> Name of the driver. Table 11.105. Command arguments Value Summary -h, --help Show this help message and exit Table 11.106. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.107. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.108. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.109. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.27. baremetal driver show Show information about a driver. Usage: Table 11.110. Positional arguments Value Summary <driver> Name of the driver. Table 11.111. Command arguments Value Summary -h, --help Show this help message and exit Table 11.112. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.113. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.114. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.115. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.28. baremetal introspection abort Abort running introspection for node. Usage: Table 11.116. Positional arguments Value Summary node Baremetal node uuid or name Table 11.117. Command arguments Value Summary -h, --help Show this help message and exit 11.29. baremetal introspection data save Save or display raw introspection data. Usage: Table 11.118. Positional arguments Value Summary node Baremetal node uuid or name Table 11.119. Command arguments Value Summary -h, --help Show this help message and exit --file <filename> Downloaded introspection data filename (default: stdout) --unprocessed Download the unprocessed data 11.30. baremetal introspection interface list List interface data including attached switch port information. Usage: Table 11.120. Positional arguments Value Summary node_ident Baremetal node uuid or name Table 11.121. 
Command arguments Value Summary -h, --help Show this help message and exit --vlan VLAN List only interfaces configured for this vlan id, can be repeated --long Show detailed information about interfaces. --fields <field> [<field> ... ] Display one or more fields. can not be used when -- long is specified Table 11.122. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.123. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.124. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.125. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.31. baremetal introspection interface show Show interface data including attached switch port information. Usage: Table 11.126. Positional arguments Value Summary node_ident Baremetal node uuid or name interface Interface name Table 11.127. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] Display one or more fields. Table 11.128. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.129. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.130. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.131. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.32. baremetal introspection list List introspection statuses Usage: Table 11.132. Command arguments Value Summary -h, --help Show this help message and exit --marker MARKER Uuid of the last item on the page --limit LIMIT The amount of items to return Table 11.133. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.134. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.135. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.136. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.33. baremetal introspection reprocess Reprocess stored introspection data Usage: Table 11.137. Positional arguments Value Summary node Baremetal node uuid or name Table 11.138. Command arguments Value Summary -h, --help Show this help message and exit 11.34. baremetal introspection rule delete Delete an introspection rule. Usage: Table 11.139. Positional arguments Value Summary uuid Rule uuid Table 11.140. Command arguments Value Summary -h, --help Show this help message and exit 11.35. baremetal introspection rule import Import one or several introspection rules from a JSON/YAML file. Usage: Table 11.141. Positional arguments Value Summary file Json or yaml file to import, may contain one or several rules Table 11.142. Command arguments Value Summary -h, --help Show this help message and exit Table 11.143. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.144. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.145. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.146. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.36. baremetal introspection rule list List all introspection rules. Usage: Table 11.147. Command arguments Value Summary -h, --help Show this help message and exit Table 11.148. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.149. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.150. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.151. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.37. baremetal introspection rule purge Drop all introspection rules. Usage: Table 11.152. Command arguments Value Summary -h, --help Show this help message and exit 11.38. baremetal introspection rule show Show an introspection rule. Usage: Table 11.153. Positional arguments Value Summary uuid Rule uuid Table 11.154. Command arguments Value Summary -h, --help Show this help message and exit Table 11.155. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.156. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.157. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.158. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.39. baremetal introspection start Start the introspection. Usage: Table 11.159. Positional arguments Value Summary node Baremetal node uuid(s) or name(s) Table 11.160. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for introspection to finish; the result will be displayed in the end --check-errors Check if errors occurred during the introspection; if any error occurs only the errors are displayed; can only be used with --wait Table 11.161. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.162. 
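As an illustration of the baremetal introspection start command described above, the following starts introspection on two nodes and waits for the results; the node names are placeholders:
$ openstack baremetal introspection start node-01 node-02 --wait --check-errors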
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.163. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.164. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.40. baremetal introspection status Get introspection status. Usage: Table 11.165. Positional arguments Value Summary node Baremetal node uuid or name Table 11.166. Command arguments Value Summary -h, --help Show this help message and exit Table 11.167. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.168. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.169. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.170. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.41. baremetal node abort Set provision state of baremetal node to abort Usage: Table 11.171. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.172. Command arguments Value Summary -h, --help Show this help message and exit 11.42. baremetal node add trait Add traits to a node. Usage: Table 11.173. Positional arguments Value Summary <node> Name or uuid of the node <trait> Trait(s) to add Table 11.174. Command arguments Value Summary -h, --help Show this help message and exit 11.43. baremetal node adopt Set provision state of baremetal node to adopt Usage: Table 11.175. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.176. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, active. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 11.44. baremetal node bios setting list List a node's BIOS settings. Usage: Table 11.177. Positional arguments Value Summary <node> Name or uuid of the node Table 11.178. Command arguments Value Summary -h, --help Show this help message and exit Table 11.179. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.180. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.181. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.182. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.45. baremetal node bios setting show Show a specific BIOS setting for a node. Usage: Table 11.183. Positional arguments Value Summary <node> Name or uuid of the node <setting name> Setting name to show Table 11.184. Command arguments Value Summary -h, --help Show this help message and exit Table 11.185. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.186. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.187. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.188. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.46. baremetal node boot device set Set the boot device for a node Usage: Table 11.189. Positional arguments Value Summary <node> Name or uuid of the node <device> One of bios, cdrom, disk, pxe, safe, wanboot Table 11.190. Command arguments Value Summary -h, --help Show this help message and exit --persistent Make changes persistent for all future boots 11.47. baremetal node boot device show Show the boot device information for a node Usage: Table 11.191. Positional arguments Value Summary <node> Name or uuid of the node Table 11.192. Command arguments Value Summary -h, --help Show this help message and exit --supported Show the supported boot devices Table 11.193. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.194. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.195. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.196. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.48. baremetal node clean Set provision state of baremetal node to clean Usage: Table 11.197. Positional arguments Value Summary <node> Name or uuid of the node. 
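An illustrative cleaning request might look like the following; the node name is a placeholder, and the step shown (deploy.erase_devices_metadata) is only an example of a clean step that your node's driver may support:
$ openstack baremetal node clean node-01 --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]' --wait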
Table 11.198. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, manageable. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. --clean-steps <clean-steps> The clean steps. may be the path to a yaml file containing the clean steps; OR - , with the clean steps being read from standard input; OR a JSON string. The value should be a list of clean-step dictionaries; each dictionary should have keys interface and step , and optional key args . 11.49. baremetal node console disable Disable console access for a node Usage: Table 11.199. Positional arguments Value Summary <node> Name or uuid of the node Table 11.200. Command arguments Value Summary -h, --help Show this help message and exit 11.50. baremetal node console enable Enable console access for a node Usage: Table 11.201. Positional arguments Value Summary <node> Name or uuid of the node Table 11.202. Command arguments Value Summary -h, --help Show this help message and exit 11.51. baremetal node console show Show console information for a node Usage: Table 11.203. Positional arguments Value Summary <node> Name or uuid of the node Table 11.204. Command arguments Value Summary -h, --help Show this help message and exit Table 11.205. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.206. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.207. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.208. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.52. baremetal node create Register a new node with the baremetal service Usage: Table 11.209. Command arguments Value Summary -h, --help Show this help message and exit --chassis-uuid <chassis> Uuid of the chassis that this node belongs to. --driver <driver> Driver used to control the node [required]. --driver-info <key=value> Key/value pair used by the driver, such as out-of-band management credentials. Can be specified multiple times. --property <key=value> Key/value pair describing the physical characteristics of the node. This is exported to Nova and used by the scheduler. Can be specified multiple times. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. --uuid <uuid> Unique uuid for the node. --name <name> Unique name for the node. --bios-interface <bios_interface> Bios interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --boot-interface <boot_interface> Boot interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --console-interface <console_interface> Console interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --deploy-interface <deploy_interface> Deploy interface used by the node's driver. 
this is only applicable when the specified --driver is a hardware type. --inspect-interface <inspect_interface> Inspect interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --management-interface <management_interface> Management interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --network-data <network data> Json string or a yaml file or - for stdin to read static network configuration for the baremetal node associated with this ironic node. Format of this file should comply with Nova network data metadata (network_data.json). Depending on ironic boot interface capabilities being used, network configuration may or may not been served to the node for offline network configuration. --network-interface <network_interface> Network interface used for switching node to cleaning/provisioning networks. --power-interface <power_interface> Power interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --raid-interface <raid_interface> Raid interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --rescue-interface <rescue_interface> Rescue interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --storage-interface <storage_interface> Storage interface used by the node's driver. --vendor-interface <vendor_interface> Vendor interface used by the node's driver. this is only applicable when the specified --driver is a hardware type. --resource-class <resource_class> Resource class for mapping nodes to nova flavors --conductor-group <conductor_group> Conductor group the node will belong to --automated-clean Enable automated cleaning for the node --no-automated-clean Explicitly disable automated cleaning for the node --owner <owner> Owner of the node. --lessee <lessee> Lessee of the node. --description <description> Description for the node. Table 11.210. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.211. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.212. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.213. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.53. baremetal node delete Unregister baremetal node(s) Usage: Table 11.214. Positional arguments Value Summary <node> Node(s) to delete (name or uuid) Table 11.215. Command arguments Value Summary -h, --help Show this help message and exit 11.54. baremetal node deploy Set provision state of baremetal node to deploy Usage: Table 11.216. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.217. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, active. Optionally takes a timeout value (in seconds). 
The default value is 0, meaning it will wait indefinitely. --config-drive <config-drive> A gzipped, base64-encoded configuration drive string OR the path to the configuration drive file OR the path to a directory containing the config drive files OR a JSON object to build config drive from. In case it's a directory, a config drive will be generated from it. In case it's a JSON object with optional keys meta_data , user_data and network_data , a config drive will be generated on the server side (see the bare metal API reference for more details). --deploy-steps <deploy-steps> The deploy steps. may be the path to a yaml file containing the deploy steps; OR - , with the deploy steps being read from standard input; OR a JSON string. The value should be a list of deploy-step dictionaries; each dictionary should have keys interface and step , and optional key args . 11.55. baremetal node inject nmi Inject NMI to baremetal node Usage: Table 11.218. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.219. Command arguments Value Summary -h, --help Show this help message and exit 11.56. baremetal node inspect Set provision state of baremetal node to inspect Usage: Table 11.220. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.221. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, manageable. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 11.57. baremetal node list List baremetal nodes Usage: Table 11.222. Command arguments Value Summary -h, --help Show this help message and exit --limit <limit> Maximum number of nodes to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <node> Node uuid (for example, of the last node in the list from a request). Returns the list of nodes after this UUID. --sort <key>[:<direction>] Sort output by specified node fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --maintenance Limit list to nodes in maintenance mode --no-maintenance Limit list to nodes not in maintenance mode --retired Limit list to retired nodes. --no-retired Limit list to not retired nodes. --fault <fault> List nodes in specified fault. --associated List only nodes associated with an instance. --unassociated List only nodes not associated with an instance. --provision-state <provision state> List nodes in specified provision state. --driver <driver> Limit list to nodes with driver <driver> --resource-class <resource class> Limit list to nodes with resource class <resource class> --conductor-group <conductor_group> Limit list to nodes with conductor group <conductor group> --conductor <conductor> Limit list to nodes with conductor <conductor> --chassis <chassis UUID> Limit list to nodes of this chassis --owner <owner> Limit list to nodes with owner <owner> --lessee <lessee> Limit list to nodes with lessee <lessee> --description-contains <description_contains> Limit list to nodes with description contains <description_contains> --long Show detailed information about the nodes. --fields <field> [<field> ... ] One or more node fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 11.223. 
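For example, a filtered node listing might look like the following; the field names shown are common node fields and the filter values are placeholders:
$ openstack baremetal node list --provision-state active --no-maintenance --fields uuid name power_state provision_state --sort name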
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.224. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.225. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.226. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.58. baremetal node maintenance set Set baremetal node to maintenance mode Usage: Table 11.227. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.228. Command arguments Value Summary -h, --help Show this help message and exit --reason <reason> Reason for setting maintenance mode. 11.59. baremetal node maintenance unset Unset baremetal node from maintenance mode Usage: Table 11.229. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.230. Command arguments Value Summary -h, --help Show this help message and exit 11.60. baremetal node manage Set provision state of baremetal node to manage Usage: Table 11.231. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.232. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, manageable. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 11.61. baremetal node passthru call Call a vendor passthru method for a node Usage: Table 11.233. Positional arguments Value Summary <node> Name or uuid of the node <method> Vendor passthru method to be executed Table 11.234. Command arguments Value Summary -h, --help Show this help message and exit --arg <key=value> Argument to pass to the passthru method (repeat option to specify multiple arguments) --http-method <http-method> The http method to use in the passthru request. one of DELETE, GET, PATCH, POST, PUT. Defaults to POST. 11.62. baremetal node passthru list List vendor passthru methods for a node Usage: Table 11.235. Positional arguments Value Summary <node> Name or uuid of the node Table 11.236. Command arguments Value Summary -h, --help Show this help message and exit Table 11.237. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.238.
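For example, listing and then calling a vendor passthru method might look like the following; the node name, method name, and argument are placeholders that depend on the node's vendor interface:
$ openstack baremetal node passthru list node-01
$ openstack baremetal node passthru call node-01 <method> --arg force=true --http-method POST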
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.239. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.240. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.63. baremetal node power off Power off a node Usage: Table 11.241. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.242. Command arguments Value Summary -h, --help Show this help message and exit --power-timeout <power-timeout> Timeout (in seconds, positive integer) to wait for the target power state before erroring out. --soft Request graceful power-off. 11.64. baremetal node power on Power on a node Usage: Table 11.243. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.244. Command arguments Value Summary -h, --help Show this help message and exit --power-timeout <power-timeout> Timeout (in seconds, positive integer) to wait for the target power state before erroring out. 11.65. baremetal node provide Set provision state of baremetal node to provide Usage: Table 11.245. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.246. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, available. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 11.66. baremetal node reboot Reboot baremetal node Usage: Table 11.247. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.248. Command arguments Value Summary -h, --help Show this help message and exit --soft Request graceful reboot. --power-timeout <power-timeout> Timeout (in seconds, positive integer) to wait for the target power state before erroring out. 11.67. baremetal node rebuild Set provision state of baremetal node to rebuild Usage: Table 11.249. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.250. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, active. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. --config-drive <config-drive> A gzipped, base64-encoded configuration drive string OR the path to the configuration drive file OR the path to a directory containing the config drive files OR a JSON object to build config drive from. In case it's a directory, a config drive will be generated from it. In case it's a JSON object with optional keys meta_data , user_data and network_data , a config drive will be generated on the server side (see the bare metal API reference for more details). --deploy-steps <deploy-steps> The deploy steps in json format. may be the path to a file containing the deploy steps; OR - , with the deploy steps being read from standard input; OR a string. The value should be a list of deploy-step dictionaries; each dictionary should have keys interface , step , priority and optional key args . 11.68. baremetal node remove trait Remove trait(s) from a node. 
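For instance, removing a single trait might look like the following; the node name and trait are placeholders:
$ openstack baremetal node remove trait node-01 CUSTOM_GPU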
Usage: Table 11.251. Positional arguments Value Summary <node> Name or uuid of the node <trait> Trait(s) to remove Table 11.252. Command arguments Value Summary -h, --help Show this help message and exit --all Remove all traits 11.69. baremetal node rescue Set provision state of baremetal node to rescue Usage: Table 11.253. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.254. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, rescue. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. --rescue-password <rescue-password> The password that will be used to log in to the rescue ramdisk. The value should be a non-empty string. 11.70. baremetal node set Set baremetal properties Usage: Table 11.255. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.256. Command arguments Value Summary -h, --help Show this help message and exit --instance-uuid <uuid> Set instance uuid of node to <uuid> --name <name> Set the name of the node --chassis-uuid <chassis UUID> Set the chassis for the node --driver <driver> Set the driver for the node --bios-interface <bios_interface> Set the bios interface for the node --reset-bios-interface Reset the bios interface to its hardware type default --boot-interface <boot_interface> Set the boot interface for the node --reset-boot-interface Reset the boot interface to its hardware type default --console-interface <console_interface> Set the console interface for the node --reset-console-interface Reset the console interface to its hardware type default --deploy-interface <deploy_interface> Set the deploy interface for the node --reset-deploy-interface Reset the deploy interface to its hardware type default --inspect-interface <inspect_interface> Set the inspect interface for the node --reset-inspect-interface Reset the inspect interface to its hardware type default --management-interface <management_interface> Set the management interface for the node --reset-management-interface Reset the management interface to its hardware type default --network-interface <network_interface> Set the network interface for the node --reset-network-interface Reset the network interface to its hardware type default --network-data <network data> Json string or a yaml file or - for stdin to read static network configuration for the baremetal node associated with this ironic node. Format of this file should comply with Nova network data metadata (network_data.json). Depending on ironic boot interface capabilities being used, network configuration may or may not be served to the node for offline network configuration.
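As an illustration of the --network-data option described above, the following passes a static network configuration file to a node; the node name and file path are placeholders, and the file must follow the Nova network_data.json format:
$ openstack baremetal node set node-01 --network-data /path/to/network_data.json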
--power-interface <power_interface> Set the power interface for the node --reset-power-interface Reset the power interface to its hardware type default --raid-interface <raid_interface> Set the raid interface for the node --reset-raid-interface Reset the raid interface to its hardware type default --rescue-interface <rescue_interface> Set the rescue interface for the node --reset-rescue-interface Reset the rescue interface to its hardware type default --storage-interface <storage_interface> Set the storage interface for the node --reset-storage-interface Reset the storage interface to its hardware type default --vendor-interface <vendor_interface> Set the vendor interface for the node --reset-vendor-interface Reset the vendor interface to its hardware type default --reset-interfaces Reset all interfaces not specified explicitly to their default implementations. Only valid with --driver. --resource-class <resource_class> Set the resource class for the node --conductor-group <conductor_group> Set the conductor group for the node --automated-clean Enable automated cleaning for the node --no-automated-clean Explicitly disable automated cleaning for the node --protected Mark the node as protected --protected-reason <protected_reason> Set the reason of marking the node as protected --retired Mark the node as retired --retired-reason <retired_reason> Set the reason of marking the node as retired --target-raid-config <target_raid_config> Set the target raid configuration (json) for the node. This can be one of: 1. a file containing YAML data of the RAID configuration; 2. "-" to read the contents from standard input; or 3. a valid JSON string. --property <key=value> Property to set on this baremetal node (repeat option to set multiple properties) --extra <key=value> Extra to set on this baremetal node (repeat option to set multiple extras) --driver-info <key=value> Driver information to set on this baremetal node (repeat option to set multiple driver infos) --instance-info <key=value> Instance information to set on this baremetal node (repeat option to set multiple instance infos) --owner <owner> Set the owner for the node --lessee <lessee> Set the lessee for the node --description <description> Set the description for the node 11.71. baremetal node show Show baremetal node details Usage: Table 11.257. Positional arguments Value Summary <node> Name or uuid of the node (or instance uuid if --instance is specified) Table 11.258. Command arguments Value Summary -h, --help Show this help message and exit --instance <node> is an instance uuid. --fields <field> [<field> ... ] One or more node fields. only these fields will be fetched from the server. Table 11.259. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.260. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.261. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.262. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.72. baremetal node trait list List a node's traits. Usage: Table 11.263. Positional arguments Value Summary <node> Name or uuid of the node Table 11.264. Command arguments Value Summary -h, --help Show this help message and exit Table 11.265. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.266. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.267. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.268. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.73. baremetal node undeploy Set provision state of baremetal node to deleted Usage: Table 11.269. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.270. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, available. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 11.74. baremetal node unrescue Set provision state of baremetal node to unrescue Usage: Table 11.271. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.272. Command arguments Value Summary -h, --help Show this help message and exit --wait [<time-out>] Wait for a node to reach the desired state, active. Optionally takes a timeout value (in seconds). The default value is 0, meaning it will wait indefinitely. 11.75. baremetal node unset Unset baremetal properties Usage: Table 11.273. Positional arguments Value Summary <node> Name or uuid of the node. Table 11.274. 
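For example, clearing a property and an extra field on a node might look like the following; the node name and key names are placeholders:
$ openstack baremetal node unset node-01 --property cpus --extra location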
Command arguments Value Summary -h, --help Show this help message and exit --instance-uuid Unset instance uuid on this baremetal node --name Unset the name of the node --resource-class Unset the resource class of the node --target-raid-config Unset the target raid configuration of the node --property <key> Property to unset on this baremetal node (repeat option to unset multiple properties) --extra <key> Extra to unset on this baremetal node (repeat option to unset multiple extras) --driver-info <key> Driver information to unset on this baremetal node (repeat option to unset multiple driver informations) --instance-info <key> Instance information to unset on this baremetal node (repeat option to unset multiple instance informations) --chassis-uuid Unset chassis uuid on this baremetal node --bios-interface Unset bios interface on this baremetal node --boot-interface Unset boot interface on this baremetal node --console-interface Unset console interface on this baremetal node --deploy-interface Unset deploy interface on this baremetal node --inspect-interface Unset inspect interface on this baremetal node --network-data Unset network data on this baremetal port. --management-interface Unset management interface on this baremetal node --network-interface Unset network interface on this baremetal node --power-interface Unset power interface on this baremetal node --raid-interface Unset raid interface on this baremetal node --rescue-interface Unset rescue interface on this baremetal node --storage-interface Unset storage interface on this baremetal node --vendor-interface Unset vendor interface on this baremetal node --conductor-group Unset conductor group for this baremetal node (the default group will be used) --automated-clean Unset automated clean option on this baremetal node (the value from configuration will be used) --protected Unset the protected flag on the node --protected-reason Unset the protected reason (gets unset automatically when protected is unset) --retired Unset the retired flag on the node --retired-reason Unset the retired reason (gets unset automatically when retired is unset) --owner Unset the owner field of the node --lessee Unset the lessee field of the node --description Unset the description field of the node 11.76. baremetal node validate Validate a node's driver interfaces Usage: Table 11.275. Positional arguments Value Summary <node> Name or uuid of the node Table 11.276. Command arguments Value Summary -h, --help Show this help message and exit Table 11.277. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.278. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.279. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.280. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.77. baremetal node vif attach Attach VIF to a given node Usage: Table 11.281. Positional arguments Value Summary <node> Name or uuid of the node <vif-id> Name or uuid of the vif to attach to a node. Table 11.282. Command arguments Value Summary -h, --help Show this help message and exit --port-uuid <port-uuid> Uuid of the baremetal port to attach the vif to. --vif-info <key=value> Record arbitrary key/value metadata. can be specified multiple times. The mandatory id parameter cannot be specified as a key. 11.78. baremetal node vif detach Detach VIF from a given node Usage: Table 11.283. Positional arguments Value Summary <node> Name or uuid of the node <vif-id> Name or uuid of the vif to detach from a node. Table 11.284. Command arguments Value Summary -h, --help Show this help message and exit 11.79. baremetal node vif list Show attached VIFs for a node Usage: Table 11.285. Positional arguments Value Summary <node> Name or uuid of the node Table 11.286. Command arguments Value Summary -h, --help Show this help message and exit Table 11.287. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.288. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.289. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.290. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.80. baremetal port create Create a new port Usage: Table 11.291. Positional arguments Value Summary <address> Mac address for this port. Table 11.292. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this port belongs to. --uuid <uuid> Uuid of the port. --extra <key=value> Record arbitrary key/value metadata. argument can be specified multiple times. --local-link-connection <key=value> Key/value metadata describing local link connection information. Valid keys are switch_info , switch_id , port_id and hostname . The keys switch_id and port_id are required. In case of a Smart NIC port, the required keys are port_id and hostname . Argument can be specified multiple times. -l <key=value> Deprecated. please use --local-link-connection instead. Key/value metadata describing Local link connection information. Valid keys are switch_info , switch_id , and port_id . The keys switch_id and port_id are required. Can be specified multiple times. --pxe-enabled <boolean> Indicates whether this port should be used when pxe booting this Node. 
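An illustrative port creation might look like the following; the MAC address, node UUID, and switch details are placeholders:
$ openstack baremetal port create 52:54:00:3f:2a:01 --node <node-uuid> --local-link-connection switch_id=00:1c:73:aa:bb:01 --local-link-connection port_id=Ethernet1/1 --pxe-enabled true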
--port-group <uuid> Uuid of the port group that this port belongs to. --physical-network <physical network> Name of the physical network to which this port is connected. --is-smartnic Indicates whether this port is a smart nic port Table 11.293. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.294. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.295. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.296. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.81. baremetal port delete Delete port(s). Usage: Table 11.297. Positional arguments Value Summary <port> Uuid(s) of the port(s) to delete. Table 11.298. Command arguments Value Summary -h, --help Show this help message and exit 11.82. baremetal port group create Create a new baremetal port group. Usage: Table 11.299. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this port group belongs to. --address <mac-address> Mac address for this port group. --name NAME Name of the port group. --uuid UUID Uuid of the port group. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. --mode MODE Mode of the port group. for possible values, refer to https://www.kernel.org/doc/Documentation/networking/bo nding.txt. --property <key=value> Key/value property related to this port group's configuration. Can be specified multiple times. --support-standalone-ports Ports that are members of this port group can be used as stand-alone ports. (default) --unsupport-standalone-ports Ports that are members of this port group cannot be used as stand-alone ports. Table 11.300. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.301. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.302. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.303. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.83. baremetal port group delete Unregister baremetal port group(s). Usage: Table 11.304. Positional arguments Value Summary <port group> Port group(s) to delete (name or uuid). Table 11.305. Command arguments Value Summary -h, --help Show this help message and exit 11.84. baremetal port group list List baremetal port groups. Usage: Table 11.306. 
Command arguments Value Summary -h, --help Show this help message and exit --limit <limit> Maximum number of port groups to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <port group> Port group uuid (for example, of the last port group in the list from a request). Returns the list of port groups after this UUID. --sort <key>[:<direction>] Sort output by specified port group fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --address <mac-address> Only show information for the port group with this mac address. --node <node> Only list port groups of this node (name or uuid). --long Show detailed information about the port groups. --fields <field> [<field> ... ] One or more port group fields. only these fields will be fetched from the server. Can not be used when -- long is specified. Table 11.307. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.308. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.309. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.310. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.85. baremetal port group set Set baremetal port group properties. Usage: Table 11.311. Positional arguments Value Summary <port group> Name or uuid of the port group. Table 11.312. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Update uuid of the node that this port group belongs to. --address <mac-address> Mac address for this port group. --name <name> Name of the port group. --extra <key=value> Extra to set on this baremetal port group (repeat option to set multiple extras). --mode MODE Mode of the port group. for possible values, refer to https://www.kernel.org/doc/Documentation/networking/bo nding.txt. --property <key=value> Key/value property related to this port group's configuration (repeat option to set multiple properties). --support-standalone-ports Ports that are members of this port group can be used as stand-alone ports. --unsupport-standalone-ports Ports that are members of this port group cannot be used as stand-alone ports. 11.86. baremetal port group show Show baremetal port group details. Usage: Table 11.313. Positional arguments Value Summary <id> Uuid or name of the port group (or mac address if --address is specified). Table 11.314. Command arguments Value Summary -h, --help Show this help message and exit --address <id> is the mac address (instead of uuid or name) of the port group. --fields <field> [<field> ... ] One or more port group fields. 
only these fields will be fetched from the server. Table 11.315. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.316. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.317. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.318. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.87. baremetal port group unset Unset baremetal port group properties. Usage: Table 11.319. Positional arguments Value Summary <port group> Name or uuid of the port group. Table 11.320. Command arguments Value Summary -h, --help Show this help message and exit --name Unset the name of the port group. --address Unset the address of the port group. --extra <key> Extra to unset on this baremetal port group (repeat option to unset multiple extras). --property <key> Property to unset on this baremetal port group (repeat option to unset multiple properties). 11.88. baremetal port list List baremetal ports. Usage: Table 11.321. Command arguments Value Summary -h, --help Show this help message and exit --address <mac-address> Only show information for the port with this mac address. --node <node> Only list ports of this node (name or uuid). --port-group <port group> Only list ports of this port group (name or uuid). --limit <limit> Maximum number of ports to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <port> Port uuid (for example, of the last port in the list from a request). Returns the list of ports after this UUID. --sort <key>[:<direction>] Sort output by specified port fields and directions (asc or desc) (default: asc). Multiple fields and directions can be specified, separated by comma. --long Show detailed information about ports. --fields <field> [<field> ... ] One or more port fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 11.322. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.323. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.324. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.325. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.89. baremetal port set Set baremetal port properties. Usage: Table 11.326. Positional arguments Value Summary <port> Uuid of the port Table 11.327. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Set uuid of the node that this port belongs to --address <address> Set mac address for this port --extra <key=value> Extra to set on this baremetal port (repeat option to set multiple extras) --port-group <uuid> Set uuid of the port group that this port belongs to. --local-link-connection <key=value> Key/value metadata describing local link connection information. Valid keys are switch_info , switch_id , port_id and hostname . The keys switch_id and port_id are required. In case of a Smart NIC port, the required keys are port_id and hostname . Argument can be specified multiple times. --pxe-enabled Indicates that this port should be used when pxe booting this node (default) --pxe-disabled Indicates that this port should not be used when pxe booting this node --physical-network <physical network> Set the name of the physical network to which this port is connected. --is-smartnic Set port to be smart nic port 11.90. baremetal port show Show baremetal port details. Usage: Table 11.328. Positional arguments Value Summary <id> Uuid of the port (or mac address if --address is specified). Table 11.329. Command arguments Value Summary -h, --help Show this help message and exit --address <id> is the mac address (instead of the uuid) of the port. --fields <field> [<field> ... ] One or more port fields. only these fields will be fetched from the server. Table 11.330. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.331. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.332. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.333. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.91. baremetal port unset Unset baremetal port properties. Usage: Table 11.334. Positional arguments Value Summary <port> Uuid of the port. Table 11.335. Command arguments Value Summary -h, --help Show this help message and exit --extra <key> Extra to unset on this baremetal port (repeat option to unset multiple extras) --port-group Remove port from the port group --physical-network Unset the physical network on this baremetal port. --is-smartnic Set port as not smart nic port 11.92. baremetal volume connector create Create a new baremetal volume connector. Usage: Table 11.336. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this volume connector belongs to. --type <type> Type of the volume connector. can be iqn , ip , mac , wwnn , wwpn , port , portgroup . 
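For example, registering an iSCSI connector for a node might look like the following; the node UUID and initiator IQN are placeholders:
$ openstack baremetal volume connector create --node <node-uuid> --type iqn --connector-id iqn.2008-10.org.openstack:node-01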
--connector-id <connector id> Id of the volume connector in the specified type. for example, the iSCSI initiator IQN for the node if the type is iqn . --uuid <uuid> Uuid of the volume connector. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. Table 11.337. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.338. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.339. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.340. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.93. baremetal volume connector delete Unregister baremetal volume connector(s). Usage: Table 11.341. Positional arguments Value Summary <volume connector> Uuid(s) of the volume connector(s) to delete. Table 11.342. Command arguments Value Summary -h, --help Show this help message and exit 11.94. baremetal volume connector list List baremetal volume connectors. Usage: Table 11.343. Command arguments Value Summary -h, --help Show this help message and exit --node <node> Only list volume connectors of this node (name or UUID). --limit <limit> Maximum number of volume connectors to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <volume connector> Volume connector uuid (for example, of the last volume connector in the list from a request). Returns the list of volume connectors after this UUID. --sort <key>[:<direction>] Sort output by specified volume connector fields and directions (asc or desc) (default:asc). Multiple fields and directions can be specified, separated by comma. --long Show detailed information about volume connectors. --fields <field> [<field> ... ] One or more volume connector fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 11.344. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.345. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.346. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.347. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.95. baremetal volume connector set Set baremetal volume connector properties. Usage: Table 11.348. Positional arguments Value Summary <volume connector> Uuid of the volume connector. Table 11.349. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this volume connector belongs to. --type <type> Type of the volume connector. can be iqn , ip , mac , wwnn , wwpn , port , portgroup . --connector-id <connector id> Id of the volume connector in the specified type. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. 11.96. baremetal volume connector show Show baremetal volume connector details. Usage: Table 11.350. Positional arguments Value Summary <id> Uuid of the volume connector. Table 11.351. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more volume connector fields. only these fields will be fetched from the server. Table 11.352. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.353. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.354. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.355. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.97. baremetal volume connector unset Unset baremetal volume connector properties. Usage: Table 11.356. Positional arguments Value Summary <volume connector> Uuid of the volume connector. Table 11.357. Command arguments Value Summary -h, --help Show this help message and exit --extra <key> Extra to unset (repeat option to unset multiple extras) 11.98. baremetal volume target create Create a new baremetal volume target. Usage: Table 11.358. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this volume target belongs to. --type <volume type> Type of the volume target, e.g. iscsi , fibre_channel . --property <key=value> Key/value property related to the type of this volume target. Can be specified multiple times. --boot-index <boot index> Boot index of the volume target. --volume-id <volume id> Id of the volume associated with this target. --uuid <uuid> Uuid of the volume target. --extra <key=value> Record arbitrary key/value metadata. can be specified multiple times. Table 11.359. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.360. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.361. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.362. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.99. baremetal volume target delete Unregister baremetal volume target(s). Usage: Table 11.363. Positional arguments Value Summary <volume target> Uuid(s) of the volume target(s) to delete. Table 11.364. Command arguments Value Summary -h, --help Show this help message and exit 11.100. baremetal volume target list List baremetal volume targets. Usage: Table 11.365. Command arguments Value Summary -h, --help Show this help message and exit --node <node> Only list volume targets of this node (name or uuid). --limit <limit> Maximum number of volume targets to return per request, 0 for no limit. Default is the maximum number used by the Baremetal API Service. --marker <volume target> Volume target uuid (for example, of the last volume target in the list from a request). Returns the list of volume targets after this UUID. --sort <key>[:<direction>] Sort output by specified volume target fields and directions (asc or desc) (default:asc). Multiple fields and directions can be specified, separated by comma. --long Show detailed information about volume targets. --fields <field> [<field> ... ] One or more volume target fields. only these fields will be fetched from the server. Can not be used when --long is specified. Table 11.366. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 11.367. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 11.368. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.369. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.101. baremetal volume target set Set baremetal volume target properties. Usage: Table 11.370. Positional arguments Value Summary <volume target> Uuid of the volume target. Table 11.371. Command arguments Value Summary -h, --help Show this help message and exit --node <uuid> Uuid of the node that this volume target belongs to. --type <volume type> Type of the volume target, e.g. iscsi , fibre_channel . --property <key=value> Key/value property related to the type of this volume target. Can be specified multiple times. --boot-index <boot index> Boot index of the volume target. --volume-id <volume id> Id of the volume associated with this target. --extra <key=value> Record arbitrary key/value metadata. 
can be specified multiple times. 11.102. baremetal volume target show Show baremetal volume target details. Usage: Table 11.372. Positional arguments Value Summary <id> Uuid of the volume target. Table 11.373. Command arguments Value Summary -h, --help Show this help message and exit --fields <field> [<field> ... ] One or more volume target fields. only these fields will be fetched from the server. Table 11.374. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 11.375. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 11.376. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 11.377. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 11.103. baremetal volume target unset Unset baremetal volume target properties. Usage: Table 11.378. Positional arguments Value Summary <volume target> Uuid of the volume target. Table 11.379. Command arguments Value Summary -h, --help Show this help message and exit --extra <key> Extra to unset (repeat option to unset multiple extras) --property <key> Property to unset on this baremetal volume target (repeat option to unset multiple properties).
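For example, a hypothetical session that lists the ports of a node, disables PXE booting on one of them, and then removes an extra field could look like the following; the node name, port UUID, and extra key are illustrative placeholders rather than values taken from this reference:

# List all ports that belong to a node, with detailed output
openstack baremetal port list --node compute-0 --long

# Disable PXE booting on one of the listed ports (UUID is a placeholder)
openstack baremetal port set --pxe-disabled 11111111-2222-3333-4444-555555555555

# Remove an extra key that was previously set on that port
openstack baremetal port unset --extra rack_position 11111111-2222-3333-4444-555555555555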
[ "openstack baremetal allocation create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--resource-class RESOURCE_CLASS] [--trait TRAITS] [--candidate-node CANDIDATE_NODES] [--name NAME] [--uuid UUID] [--owner OWNER] [--extra <key=value>] [--wait [<time-out>]] [--node NODE]", "openstack baremetal allocation delete [-h] <allocation> [<allocation> ...]", "openstack baremetal allocation list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <limit>] [--marker <allocation>] [--sort <key>[:<direction>]] [--node <node>] [--resource-class <resource_class>] [--state <state>] [--owner <owner>] [--long | --fields <field> [<field> ...]]", "openstack baremetal allocation set [-h] [--name <name>] [--extra <key=value>] <allocation>", "openstack baremetal allocation show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <id>", "openstack baremetal allocation unset [-h] [--name] [--extra <key>] <allocation>", "openstack baremetal chassis create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--extra <key=value>] [--uuid <uuid>]", "openstack baremetal chassis delete [-h] <chassis> [<chassis> ...]", "openstack baremetal chassis list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--fields <field> [<field> ...]] [--limit <limit>] [--long] [--marker <chassis>] [--sort <key>[:<direction>]]", "openstack baremetal chassis set [-h] [--description <description>] [--extra <key=value>] <chassis>", "openstack baremetal chassis show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <chassis>", "openstack baremetal chassis unset [-h] [--description] [--extra <key>] <chassis>", "openstack baremetal conductor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <limit>] [--marker <conductor>] [--sort <key>[:<direction>]] [--long | --fields <field> [<field> ...]]", "openstack baremetal conductor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <conductor>", "openstack baremetal create [-h] <file> [<file> ...]", "openstack baremetal deploy template create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--uuid <uuid>] [--extra <key=value>] --steps <steps> <name>", "openstack baremetal deploy template delete [-h] <template> [<template> ...]", "openstack baremetal deploy template list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] 
[--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <limit>] [--marker <template>] [--sort <key>[:<direction>]] [--long | --fields <field> [<field> ...]]", "openstack baremetal deploy template set [-h] [--name <name>] [--steps <steps>] [--extra <key=value>] <template>", "openstack baremetal deploy template show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <template>", "openstack baremetal deploy template unset [-h] [--extra <key>] <template>", "openstack baremetal driver list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--type <type>] [--long]", "openstack baremetal driver passthru call [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--arg <key=value>] [--http-method <http-method>] <driver> <method>", "openstack baremetal driver passthru list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <driver>", "openstack baremetal driver property list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <driver>", "openstack baremetal driver raid property list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <driver>", "openstack baremetal driver show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <driver>", "openstack baremetal introspection abort [-h] node", "openstack baremetal introspection data save [-h] [--file <filename>] [--unprocessed] node", "openstack baremetal introspection interface list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--vlan VLAN] [--long | --fields <field> [<field> ...]] node_ident", "openstack baremetal introspection interface show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] node_ident interface", "openstack baremetal introspection list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker MARKER] [--limit LIMIT]", "openstack baremetal introspection reprocess [-h] node", "openstack baremetal introspection rule delete [-h] uuid", "openstack baremetal introspection rule import [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] 
[--sort-ascending | --sort-descending] file", "openstack baremetal introspection rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]", "openstack baremetal introspection rule purge [-h]", "openstack baremetal introspection rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] uuid", "openstack baremetal introspection start [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--wait] [--check-errors] node [node ...]", "openstack baremetal introspection status [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] node", "openstack baremetal node abort [-h] <node>", "openstack baremetal node add trait [-h] <node> <trait> [<trait> ...]", "openstack baremetal node adopt [-h] [--wait [<time-out>]] <node>", "openstack baremetal node bios setting list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <node>", "openstack baremetal node bios setting show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <node> <setting name>", "openstack baremetal node boot device set [-h] [--persistent] <node> <device>", "openstack baremetal node boot device show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--supported] <node>", "openstack baremetal node clean [-h] [--wait [<time-out>]] --clean-steps <clean-steps> <node>", "openstack baremetal node console disable [-h] <node>", "openstack baremetal node console enable [-h] <node>", "openstack baremetal node console show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <node>", "openstack baremetal node create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--chassis-uuid <chassis>] --driver <driver> [--driver-info <key=value>] [--property <key=value>] [--extra <key=value>] [--uuid <uuid>] [--name <name>] [--bios-interface <bios_interface>] [--boot-interface <boot_interface>] [--console-interface <console_interface>] [--deploy-interface <deploy_interface>] [--inspect-interface <inspect_interface>] [--management-interface <management_interface>] [--network-data <network data>] [--network-interface <network_interface>] [--power-interface <power_interface>] [--raid-interface <raid_interface>] [--rescue-interface <rescue_interface>] [--storage-interface <storage_interface>] [--vendor-interface <vendor_interface>] [--resource-class <resource_class>] [--conductor-group <conductor_group>] [--automated-clean | --no-automated-clean] [--owner <owner>] [--lessee <lessee>] [--description <description>]", "openstack baremetal node delete [-h] <node> [<node> ...]", "openstack baremetal node deploy [-h] [--wait [<time-out>]] 
[--config-drive <config-drive>] [--deploy-steps <deploy-steps>] <node>", "openstack baremetal node inject nmi [-h] <node>", "openstack baremetal node inspect [-h] [--wait [<time-out>]] <node>", "openstack baremetal node list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <limit>] [--marker <node>] [--sort <key>[:<direction>]] [--maintenance | --no-maintenance] [--retired | --no-retired] [--fault <fault>] [--associated | --unassociated] [--provision-state <provision state>] [--driver <driver>] [--resource-class <resource class>] [--conductor-group <conductor_group>] [--conductor <conductor>] [--chassis <chassis UUID>] [--owner <owner>] [--lessee <lessee>] [--description-contains <description_contains>] [--long | --fields <field> [<field> ...]]", "openstack baremetal node maintenance set [-h] [--reason <reason>] <node>", "openstack baremetal node maintenance unset [-h] <node>", "openstack baremetal node manage [-h] [--wait [<time-out>]] <node>", "openstack baremetal node passthru call [-h] [--arg <key=value>] [--http-method <http-method>] <node> <method>", "openstack baremetal node passthru list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <node>", "openstack baremetal node power off [-h] [--power-timeout <power-timeout>] [--soft] <node>", "openstack baremetal node power on [-h] [--power-timeout <power-timeout>] <node>", "openstack baremetal node provide [-h] [--wait [<time-out>]] <node>", "openstack baremetal node reboot [-h] [--soft] [--power-timeout <power-timeout>] <node>", "openstack baremetal node rebuild [-h] [--wait [<time-out>]] [--config-drive <config-drive>] [--deploy-steps <deploy-steps>] <node>", "openstack baremetal node remove trait [-h] [--all] <node> [<trait> ...]", "openstack baremetal node rescue [-h] [--wait [<time-out>]] --rescue-password <rescue-password> <node>", "openstack baremetal node set [-h] [--instance-uuid <uuid>] [--name <name>] [--chassis-uuid <chassis UUID>] [--driver <driver>] [--bios-interface <bios_interface> | --reset-bios-interface] [--boot-interface <boot_interface> | --reset-boot-interface] [--console-interface <console_interface> | --reset-console-interface] [--deploy-interface <deploy_interface> | --reset-deploy-interface] [--inspect-interface <inspect_interface> | --reset-inspect-interface] [--management-interface <management_interface> | --reset-management-interface] [--network-interface <network_interface> | --reset-network-interface] [--network-data <network data>] [--power-interface <power_interface> | --reset-power-interface] [--raid-interface <raid_interface> | --reset-raid-interface] [--rescue-interface <rescue_interface> | --reset-rescue-interface] [--storage-interface <storage_interface> | --reset-storage-interface] [--vendor-interface <vendor_interface> | --reset-vendor-interface] [--reset-interfaces] [--resource-class <resource_class>] [--conductor-group <conductor_group>] [--automated-clean | --no-automated-clean] [--protected] [--protected-reason <protected_reason>] [--retired] [--retired-reason <retired_reason>] [--target-raid-config <target_raid_config>] [--property <key=value>] [--extra <key=value>] [--driver-info <key=value>] [--instance-info <key=value>] [--owner <owner>] 
[--lessee <lessee>] [--description <description>] <node>", "openstack baremetal node show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--instance] [--fields <field> [<field> ...]] <node>", "openstack baremetal node trait list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <node>", "openstack baremetal node undeploy [-h] [--wait [<time-out>]] <node>", "openstack baremetal node unrescue [-h] [--wait [<time-out>]] <node>", "openstack baremetal node unset [-h] [--instance-uuid] [--name] [--resource-class] [--target-raid-config] [--property <key>] [--extra <key>] [--driver-info <key>] [--instance-info <key>] [--chassis-uuid] [--bios-interface] [--boot-interface] [--console-interface] [--deploy-interface] [--inspect-interface] [--network-data] [--management-interface] [--network-interface] [--power-interface] [--raid-interface] [--rescue-interface] [--storage-interface] [--vendor-interface] [--conductor-group] [--automated-clean] [--protected] [--protected-reason] [--retired] [--retired-reason] [--owner] [--lessee] [--description] <node>", "openstack baremetal node validate [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <node>", "openstack baremetal node vif attach [-h] [--port-uuid <port-uuid>] [--vif-info <key=value>] <node> <vif-id>", "openstack baremetal node vif detach [-h] <node> <vif-id>", "openstack baremetal node vif list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <node>", "openstack baremetal port create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --node <uuid> [--uuid <uuid>] [--extra <key=value>] [--local-link-connection <key=value>] [-l <key=value>] [--pxe-enabled <boolean>] [--port-group <uuid>] [--physical-network <physical network>] [--is-smartnic] <address>", "openstack baremetal port delete [-h] <port> [<port> ...]", "openstack baremetal port group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --node <uuid> [--address <mac-address>] [--name NAME] [--uuid UUID] [--extra <key=value>] [--mode MODE] [--property <key=value>] [--support-standalone-ports | --unsupport-standalone-ports]", "openstack baremetal port group delete [-h] <port group> [<port group> ...]", "openstack baremetal port group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <limit>] [--marker <port group>] [--sort <key>[:<direction>]] [--address <mac-address>] [--node <node>] [--long | --fields <field> [<field> ...]]", "openstack baremetal port group set [-h] [--node <uuid>] [--address <mac-address>] [--name <name>] [--extra <key=value>] [--mode MODE] [--property <key=value>] [--support-standalone-ports | 
--unsupport-standalone-ports] <port group>", "openstack baremetal port group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--address] [--fields <field> [<field> ...]] <id>", "openstack baremetal port group unset [-h] [--name] [--address] [--extra <key>] [--property <key>] <port group>", "openstack baremetal port list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--address <mac-address>] [--node <node>] [--port-group <port group>] [--limit <limit>] [--marker <port>] [--sort <key>[:<direction>]] [--long | --fields <field> [<field> ...]]", "openstack baremetal port set [-h] [--node <uuid>] [--address <address>] [--extra <key=value>] [--port-group <uuid>] [--local-link-connection <key=value>] [--pxe-enabled | --pxe-disabled] [--physical-network <physical network>] [--is-smartnic] <port>", "openstack baremetal port show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--address] [--fields <field> [<field> ...]] <id>", "openstack baremetal port unset [-h] [--extra <key>] [--port-group] [--physical-network] [--is-smartnic] <port>", "openstack baremetal volume connector create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --node <uuid> --type <type> --connector-id <connector id> [--uuid <uuid>] [--extra <key=value>]", "openstack baremetal volume connector delete [-h] <volume connector> [<volume connector> ...]", "openstack baremetal volume connector list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--node <node>] [--limit <limit>] [--marker <volume connector>] [--sort <key>[:<direction>]] [--long | --fields <field> [<field> ...]]", "openstack baremetal volume connector set [-h] [--node <uuid>] [--type <type>] [--connector-id <connector id>] [--extra <key=value>] <volume connector>", "openstack baremetal volume connector show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <id>", "openstack baremetal volume connector unset [-h] [--extra <key>] <volume connector>", "openstack baremetal volume target create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --node <uuid> --type <volume type> [--property <key=value>] --boot-index <boot index> --volume-id <volume id> [--uuid <uuid>] [--extra <key=value>]", "openstack baremetal volume target delete [-h] <volume target> [<volume target> ...]", "openstack baremetal volume target list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--node <node>] [--limit <limit>] [--marker <volume target>] [--sort <key>[:<direction>]] [--long | --fields <field> [<field> ...]]", "openstack baremetal volume target set [-h] [--node <uuid>] [--type <volume type>] [--property 
<key=value>] [--boot-index <boot index>] [--volume-id <volume id>] [--extra <key=value>] <volume target>", "openstack baremetal volume target show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fields <field> [<field> ...]] <id>", "openstack baremetal volume target unset [-h] [--extra <key>] [--property <key>] <volume target>" ]
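As a complement to the usage strings above, a hypothetical storage setup for a single node might combine the volume connector and volume target commands as follows; every UUID, the IQN, and the volume ID are illustrative placeholders:

# Register the node's iSCSI initiator IQN as a volume connector
openstack baremetal volume connector create --node 11111111-2222-3333-4444-555555555555 --type iqn --connector-id iqn.2024-01.org.example:node-1

# Register the boot volume for the same node as a volume target
openstack baremetal volume target create --node 11111111-2222-3333-4444-555555555555 --type iscsi --boot-index 0 --volume-id 66666666-7777-8888-9999-aaaaaaaaaaaa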
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/baremetal
Chapter 1. Overview
Chapter 1. Overview Note The Ruby software development kit (SDK) is deprecated. Support for the Ruby SDK will be removed in a later release. The Ruby software development kit is a Ruby gem that allows you to interact with the Red Hat Virtualization Manager in Ruby projects. By installing the gem and adding its classes to your project, you can access a range of functionality for high-level automation of administrative tasks. 1.1. Prerequisites To install the Ruby software development kit, you must have: A system with Red Hat Enterprise Linux 8 installed. Both the Server and Workstation variants are supported. A subscription that provides Red Hat Virtualization entitlements.
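As a rough sketch of what satisfying these prerequisites leads to, installation usually comes down to a single package on a subscribed Red Hat Enterprise Linux 8 host; the package name below is an assumption for illustration and should be checked against the installation chapter of this guide:

# Install the Ruby SDK (package name assumed, verify before use)
dnf install rubygem-ovirt-engine-sdk4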
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/chap-overview
Part I. Creating Red Hat Decision Manager business applications with Spring Boot
Part I. Creating Red Hat Decision Manager business applications with Spring Boot As a developer, you can create Red Hat Decision Manager Spring Boot business applications through Maven archetype commands, configure those applications, and deploy them to an existing service or to the cloud.
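As a minimal sketch of the Maven archetype workflow this part describes, a project generation command has the general shape shown below; the archetype coordinates are placeholders, not real values, and must be replaced with the coordinates documented for your Red Hat Decision Manager release:

mvn archetype:generate \
  -DarchetypeGroupId=<archetype-group-id> \
  -DarchetypeArtifactId=<business-application-archetype> \
  -DarchetypeVersion=<archetype-version> \
  -DgroupId=com.company -DartifactId=business-application -Dversion=1.0-SNAPSHOT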
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/assembly-springboot-business-apps
Chapter 23. KafkaAuthorizationKeycloak schema reference
Chapter 23. KafkaAuthorizationKeycloak schema reference Used in: KafkaClusterSpec The type property is a discriminator that distinguishes use of the KafkaAuthorizationKeycloak type from KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationCustom . It must have the value keycloak for the type KafkaAuthorizationKeycloak . Property Property type Description type string Must be keycloak . clientId string OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. tokenEndpointUri string Authorization server token endpoint URI. tlsTrustedCertificates CertSecretSource array Trusted certificates for TLS connection to the OAuth server. disableTlsHostnameVerification boolean Enable or disable TLS hostname verification. Default value is false . delegateToKafkaAcls boolean Whether authorization decision should be delegated to the 'Simple' authorizer if DENIED by Red Hat build of Keycloak Authorization Services policies. Default value is false . grantsRefreshPeriodSeconds integer The time between two consecutive grants refresh runs in seconds. The default value is 60. grantsRefreshPoolSize integer The number of threads to use to refresh grants for active sessions. The more threads, the more parallelism, so the sooner the job completes. However, using more threads places a heavier load on the authorization server. The default value is 5. grantsMaxIdleTimeSeconds integer The time, in seconds, after which an idle grant can be evicted from the cache. The default value is 300. grantsGcPeriodSeconds integer The time, in seconds, between consecutive runs of a job that cleans stale grants from the cache. The default value is 300. grantsAlwaysLatest boolean Controls whether the latest grants are fetched for a new session. When enabled, grants are retrieved from Red Hat build of Keycloak and cached for the user. The default value is false . superUsers string array List of super users. Should contain list of user principals which should get unlimited access rights. connectTimeoutSeconds integer The connect timeout in seconds when connecting to authorization server. If not set, the effective connect timeout is 60 seconds. readTimeoutSeconds integer The read timeout in seconds when connecting to authorization server. If not set, the effective read timeout is 60 seconds. httpRetries integer The maximum number of retries to attempt if an initial HTTP request fails. If not set, the default is to not attempt any retries. enableMetrics boolean Enable or disable OAuth metrics. The default value is false . includeAcceptHeader boolean Whether the Accept header should be set in requests to the authorization servers. The default value is true .
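Taken together, these properties are configured under the authorization block of the kafka section in a Kafka custom resource. The following is a minimal sketch only; the cluster name, client ID, token endpoint URI, and super user principal are illustrative values, not defaults:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ... other kafka configuration ...
    authorization:
      type: keycloak
      clientId: kafka
      tokenEndpointUri: https://keycloak.example.com/realms/kafka-authz/protocol/openid-connect/token
      delegateToKafkaAcls: false
      superUsers:
        - User:CN=my-cluster-admin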
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkaauthorizationkeycloak-reference
Chapter 3. Using node disruption policies to minimize disruption from machine config changes
Chapter 3. Using node disruption policies to minimize disruption from machine config changes By default, when you make certain changes to the fields in a MachineConfig object, the Machine Config Operator (MCO) drains and reboots the nodes associated with that machine config. However, you can create a node disruption policy that defines a set of changes to some Ignition config objects that would require little or no disruption to your workloads. A node disruption policy allows you to define the configuration changes that cause a disruption to your cluster, and which changes do not. This allows you to reduce node downtime when making small machine configuration changes in your cluster. To configure the policy, you modify the MachineConfiguration object, which is in the openshift-machine-config-operator namespace. See the example node disruption policies in the MachineConfiguration objects that follow. Note There are machine configuration changes that always require a reboot, regardless of any node disruption policies. For more information, see About the Machine Config Operator . After you create the node disruption policy, the MCO validates the policy to search for potential issues in the file, such as problems with formatting. The MCO then merges the policy with the cluster defaults and populates the status.nodeDisruptionPolicyStatus fields in the machine config with the actions to be performed upon future changes to the machine config. The configurations in your policy always overwrite the cluster defaults. Important The MCO does not validate whether a change can be successfully applied by your node disruption policy. Therefore, you are responsible to ensure the accuracy of your node disruption policies. For example, you can configure a node disruption policy so that sudo configurations do not require a node drain and reboot. Or, you can configure your cluster so that updates to sshd are applied with only a reload of that one service. Important The node disruption policy feature is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can control the behavior of the MCO when making the changes to the following Ignition configuration objects: configuration files : You add to or update the files in the /var or /etc directory. systemd units : You create and set the status of a systemd service or modify an existing systemd service. users and groups : You change SSH keys in the passwd section postinstallation. ICSP , ITMS , IDMS objects: You can remove mirroring rules from an ImageContentSourcePolicy (ICSP), ImageTagMirrorSet (ITMS), and ImageDigestMirrorSet (IDMS) object. When you make any of these changes, the node disruption policy determines which of the following actions are required when the MCO implements the changes: Reboot : The MCO drains and reboots the nodes. This is the default behavior. None : The MCO does not drain or reboot the nodes. The MCO applies the changes with no further action. Drain : The MCO cordons and drains the nodes of their workloads. The workloads restart with the new configurations. 
Reload : For services, the MCO reloads the specified services without restarting the service. Restart : For services, the MCO fully restarts the specified services. DaemonReload : The MCO reloads the systemd manager configuration. Special : This is an internal MCO-only action and cannot be set by the user. Note The Reboot and None actions cannot be used with any other actions, as the Reboot and None actions override the others. Actions are applied in the order that they are set in the node disruption policy list. If you make other machine config changes that do require a reboot or other disruption to the nodes, that reboot supercedes the node disruption policy actions. 3.1. Example node disruption policies The following example MachineConfiguration objects contain a node disruption policy. Tip A MachineConfiguration object and a MachineConfig object are different objects. A MachineConfiguration object is a singleton object in the MCO namespace that contains configuration parameters for the MCO operator. A MachineConfig object defines changes that are applied to a machine config pool. The following example MachineConfiguration object shows no user defined policies. The default node disruption policy values are shown in the status stanza. Default node disruption policy apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: logLevel: Normal managementState: Managed operatorLogLevel: Normal status: nodeDisruptionPolicyStatus: clusterPolicies: files: - actions: - type: None path: /etc/mco/internal-registry-pull-secret.json - actions: - type: None path: /var/lib/kubelet/config.json - actions: - reload: serviceName: crio.service type: Reload path: /etc/machine-config-daemon/no-reboot/containers-gpg.pub - actions: - reload: serviceName: crio.service type: Reload path: /etc/containers/policy.json - actions: - type: Special path: /etc/containers/registries.conf sshkey: actions: - type: None readyReplicas: 0 In the following example, when changes are made to the SSH keys, the MCO drains the cluster nodes, reloads the crio.service , reloads the systemd configuration, and restarts the crio-service . Example node disruption policy for an SSH key change apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster # ... spec: nodeDisruptionPolicy: sshkey: actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart # ... In the following example, when changes are made to the /etc/chrony.conf file, the MCO restarts the chronyd.service on the cluster nodes. Example node disruption policy for a configuration file change apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster # ... spec: nodeDisruptionPolicy: files: - actions: - restart: serviceName: chronyd.service type: Restart path: /etc/chrony.conf In the following example, when changes are made to the auditd.service systemd unit, the MCO drains the cluster nodes, reloads the crio.service , reloads the systemd manager configuration, and restarts the crio.service . Example node disruption policy for a systemd unit change apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster # ... 
spec: nodeDisruptionPolicy: units: - name: auditd.service actions: - type: Drain - type: Reload reload: serviceName: crio.service - type: DaemonReload - type: Restart restart: serviceName: crio.service In the following example, when changes are made to the registries.conf file, such as by editing an ImageContentSourcePolicy (ICSP) object, the MCO does not drain or reboot the nodes and applies the changes with no further action. Example node disruption policy for a registries.conf file change apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster # ... spec: nodeDisruptionPolicy: files: - actions: - type: None path: /etc/containers/registries.conf 3.2. Configuring node restart behaviors upon machine config changes You can create a node disruption policy to define the machine configuration changes that cause a disruption to your cluster, and which changes do not. You can control how your nodes respond to changes in the files in the /var or /etc directory, the systemd units, the SSH keys, and the registries.conf file. When you make any of these changes, the node disruption policy determines which of the following actions are required when the MCO implements the changes: Reboot : The MCO drains and reboots the nodes. This is the default behavior. None : The MCO does not drain or reboot the nodes. The MCO applies the changes with no further action. Drain : The MCO cordons and drains the nodes of their workloads. The workloads restart with the new configurations. Reload : For services, the MCO reloads the specified services without restarting the service. Restart : For services, the MCO fully restarts the specified services. DaemonReload : The MCO reloads the systemd manager configuration. Special : This is an internal MCO-only action and cannot be set by the user. Note The Reboot and None actions cannot be used with any other actions, as the Reboot and None actions override the others. Actions are applied in the order that they are set in the node disruption policy list. If you make other machine config changes that do require a reboot or other disruption to the nodes, that reboot supercedes the node disruption policy actions. Prerequisites You have enabled the TechPreviewNoUpgrade feature set by using the feature gates. For more information, see "Enabling features using feature gates". Warning Enabling the TechPreviewNoUpgrade feature set on your cluster prevents minor version updates. The TechPreviewNoUpgrade feature set cannot be disabled. Do not enable this feature set on production clusters. Procedure Edit the machineconfigurations.operator.openshift.io object to define the node disruption policy: USD oc edit MachineConfiguration cluster -n openshift-machine-config-operator Add a node disruption policy similar to the following: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster # ... spec: nodeDisruptionPolicy: 1 files: 2 - actions: 3 - restart: 4 serviceName: chronyd.service 5 type: Restart path: /etc/chrony.conf 6 sshkey: 7 actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart units: 8 - actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart name: test.service 1 Specifies the node disruption policy. 2 Specifies a list of machine config file definitions and actions to take to changes on those paths. This list supports a maximum of 50 entries. 
3 Specifies the series of actions to be executed upon changes to the specified files. Actions are applied in the order that they are set in this list. This list supports a maximum of 10 entries. 4 Specifies that the listed service is to be reloaded upon changes to the specified files. 5 Specifies the full name of the service to be acted upon. 6 Specifies the location of a file that is managed by a machine config. The actions in the policy apply when changes are made to the file in path . 7 Specifies a list of service names and actions to take upon changes to the SSH keys in the cluster. 8 Specifies a list of systemd unit names and actions to take upon changes to those units. Verification View the MachineConfiguration object file that you created: Example output apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: labels: machineconfiguration.openshift.io/role: worker name: cluster # ... status: nodeDisruptionPolicyStatus: 1 clusterPolicies: files: # ... - actions: - restart: serviceName: chronyd.service type: Restart path: /etc/chrony.conf sshkey: actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart units: - actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart name: test.se # ... 1 Specifies the current cluster-validated policies.
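For instance, the sudo and sshd scenarios mentioned earlier in this chapter could be expressed as file policies; the file paths and service name below are illustrative assumptions about how those configurations are managed on the nodes:

apiVersion: operator.openshift.io/v1
kind: MachineConfiguration
metadata:
  name: cluster
# ...
spec:
  nodeDisruptionPolicy:
    files:
    - actions:
      - type: None
      path: /etc/sudoers.d/99-custom
    - actions:
      - reload:
          serviceName: sshd.service
        type: Reload
      path: /etc/ssh/sshd_config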
[ "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: logLevel: Normal managementState: Managed operatorLogLevel: Normal status: nodeDisruptionPolicyStatus: clusterPolicies: files: - actions: - type: None path: /etc/mco/internal-registry-pull-secret.json - actions: - type: None path: /var/lib/kubelet/config.json - actions: - reload: serviceName: crio.service type: Reload path: /etc/machine-config-daemon/no-reboot/containers-gpg.pub - actions: - reload: serviceName: crio.service type: Reload path: /etc/containers/policy.json - actions: - type: Special path: /etc/containers/registries.conf sshkey: actions: - type: None readyReplicas: 0", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: nodeDisruptionPolicy: sshkey: actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: nodeDisruptionPolicy: files: - actions: - restart: serviceName: chronyd.service type: Restart path: /etc/chrony.conf", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: nodeDisruptionPolicy: units: - name: auditd.service actions: - type: Drain - type: Reload reload: serviceName: crio.service - type: DaemonReload - type: Restart restart: serviceName: crio.service", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: nodeDisruptionPolicy: files: - actions: - type: None path: /etc/containers/registries.conf", "oc edit MachineConfiguration cluster -n openshift-machine-config-operator", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster spec: nodeDisruptionPolicy: 1 files: 2 - actions: 3 - restart: 4 serviceName: chronyd.service 5 type: Restart path: /etc/chrony.conf 6 sshkey: 7 actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart units: 8 - actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart name: test.service", "oc get MachineConfiguration/cluster -o yaml", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: labels: machineconfiguration.openshift.io/role: worker name: cluster status: nodeDisruptionPolicyStatus: 1 clusterPolicies: files: - actions: - restart: serviceName: chronyd.service type: Restart path: /etc/chrony.conf sshkey: actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart units: - actions: - type: Drain - reload: serviceName: crio.service type: Reload - type: DaemonReload - restart: serviceName: crio.service type: Restart name: test.se" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_configuration/machine-config-node-disruption_machine-configs-configure
Chapter 34. Servers and Services
Chapter 34. Servers and Services Internal buffer locks no longer cause deadlocks in libdb Previously, the libdb database did not lock its internal buffers in the correct order when it accessed pages located in an off-page duplicate (OPD) tree while processing operations on a cursor. A writer process accessed first the primary tree and then the OPD tree while a reader process did the same in the reverse order. When a writer process accessed a page from the primary tree while a reader process accessed a page from the OPD tree, the processes were unable to access the page from the other tree because the pages were simultaneously locked by the other process. This consequently caused a deadlock in libdb because neither of the processes released their locks. With this update, the btree version of the cursor->get method has been modified to lock the tree's pages in the same order as the writing methods, that is, the primary tree first and the OPD tree second. As a result, deadlocks in libdb no longer occur in the described scenario. (BZ# 1349779 ) Weekly log rotations are now triggered more predictably Weekly log rotations were previously performed by the logrotate utility when exactly 7 days (604800 seconds) elapsed since the last rotation. Consequently, if the logrotate command was triggered by a cron job slightly sooner, the rotation was delayed until the run. With this update, weekly log rotations ignore the exact time. As a result, when the day counter advances by 7 or more days since the last rotation, a new rotation is triggered. (BZ# 1465720 ) ghostscript no longer crashes while processing large PDF files Previously, processing large PDF files could cause the ghostscript utility to terminate unexpectedly under certain rare circumstances. With this update, an internal ghostscript virtual machine limit, DEFAULT_VM_THRESHOLD , has been increased, and the described problem no longer occurs. In addition, processing of large files is now slightly faster. (BZ#1479852) Converting large PDF files to PNG with ghostscript no longer fails Due to a bug in the upstream source code, converting large PDF files to the PNG format using the ghostscript utility failed under certain rare circumstances. This bug has been fixed, and the described problem no longer occurs. (BZ#1473337) krfb no longer crashes when unable to bind to an IPv6 port Previously, connecting to the krfb application with a VNC client when krfb could not bind to an IPv6 port, krfb terminated unexpectedly. This update fixes the improper handling of uninitialized IPv6 socket, and applications built on the libvncserver library now deal with the unsuccessful attempt to listen on an IPv6 port correctly. (BZ#1314814) mod_nss properly detects the threading model in Apache to improve performance Previously, the mod_nss module was not detecting the threading model properly in Apache. Consequently, users experienced slower performance because the TLS Session ID was not maintained across handshakes and a new session ID was generated for each handshake. This update fixes the threading model detection. As a result, TLS Session IDs are now properly cached, which eliminates the described performance problems. (BZ# 1461580 ) atd no longer runs with 100% CPU utilization nor fills system log Previously, the atd daemon of the at utility handled incorrectly some types of broken jobs, particularly jobs of non-existent users. As a consequence, atd used up all available CPU resources and filled the system log by messages sent with unlimited frequency. 
With this update, the handling of the broken jobs by atd has been fixed and the problem does not occur anymore. (BZ#1481355) ReaR now provides a more helpful error message when grub2-efi-x64-modules is missing Previously, an attempt to create a ReaR backup on UEFI systems using the rear mkrescue and rear mkbackup commands failed due to a missing grub2-efi-x64-modules package, which is not installed by default but is required by ReaR to generate a GRUB image. The commands failed with the following error message: This message proved to be confusing and unhelpful. With this update, the error will still appear in the same circumstances, but it will point out how to fix the problem: As the updated message explains, you must install the missing grub2-efi-x64-modules package before you can create a ReaR backup on a system with UEFI firmware. (BZ# 1492177 ) ReaR no longer fails to determine disk size during a mkrescue operation Previously, the ReaR (Relax-and-Recover) utility sometimes encountered a failure while querying partition sizes when saving the disk layout due to a race condition with udev . As a consequence, the mkrescue operation failed with the following message: Therefore it was not possible to create the rescue image. The bug has been fixed, and rescue image creation now works as expected. (BZ# 1388653 ) ReaR no longer requires dosfsck and efibootmgr on non-UEFI systems Previously, ReaR (Relax-and-Recover) incorrectly required the dosfsck and efibootmgr utilities installed on systems that do not use UEFI. As a consequence, if the utilities were missing, the rear mkrescue command failed with an error. This bug has been fixed, and ReaR now requires the mentioned utilities to be installed only on UEFI systems. (BZ# 1479002 ) ReaR no longer fails with NetBackup and has more reliable network configuration Previously, two problems in the startup procedure of the rescue system caused the ReaR (Relax-and-Recover) restore process to fail when using the NetBackup method. The system's init scripts were sourced instead of executed when used by ReaR . As a consequence, the NetBackup init script aborted the system-setup process. Additionally, processes created by the system setup were immediately terminated. This affected the dhclient tool as well, and in some cases caused an IP address conflict. With this update, both bugs have been fixed. As a result, ReaR works properly with the NetBackup method, and network configuration using DHCP is more reliable. (BZ# 1506231 ) ReaR recovery no longer fails when backup integrity checking is enabled Previously, if ReaR (Relax-and-Recover) was configured to use backup integrity checking ( BACKUP_INTEGRITY_CHECK=1 ), the recovery process always failed because the md5sum command could not find the backup archive. This bug has been fixed, and the described problem no longer occurs. (BZ# 1532676 )
[ "ERROR: Error occurred during grub2-mkimage of BOOTX64.efi", "WARNING: /usr/lib/grub/x86_64-efi/moddep.lst not found, grub2-mkimage will likely fail. Please install the grub2-efi-x64-modules package to fix this.", "ERROR: BUG BUG BUG! Could not determine size of disk" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/bug_fixes_servers_and_services
Chapter 9. Using the Topic Operator to manage Kafka topics
Chapter 9. Using the Topic Operator to manage Kafka topics The KafkaTopic resource configures topics, including partition and replication factor settings. When you create, modify, or delete a topic using KafkaTopic , the Topic Operator ensures that these changes are reflected in the Kafka cluster. For more information on the KafkaTopic resource, see the KafkaTopic schema reference . 9.1. Topic management modes The KafkaTopic resource is responsible for managing a single topic within a Kafka cluster. The Topic Operator provides two modes for managing KafkaTopic resources and Kafka topics: Bidirectional mode Bidirectional mode requires ZooKeeper for cluster management. It is not compatible with using AMQ Streams in KRaft mode. (Preview) Unidirectional mode Unidirectional mode does not require ZooKeeper for cluster management. It is compatible with using AMQ Streams in KRaft mode. Note Unidirectional topic management is available as a preview. Unidirectional topic management is not enabled by default, so you must enable the UnidirectionalTopicOperator feature gate to be able to use it. 9.1.1. Bidirectional topic management In bidirectional mode, the Topic Operator operates as follows: When a KafkaTopic is created, deleted, or changed, the Topic Operator performs the corresponding operation on the Kafka topic. Similarly, when a topic is created, deleted, or changed within the Kafka cluster, the Topic Operator performs the corresponding operation on the KafkaTopic resource. Tip Try to stick to one method of managing topics, either through the KafkaTopic resources or directly in Kafka. Avoid routinely switching between both methods for a given topic. 9.1.2. (Preview) Unidirectional topic management In unidirectional mode, the Topic Operator operates as follows: When a KafkaTopic is created, deleted, or changed, the Topic Operator performs the corresponding operation on the Kafka topic. If a topic is created, deleted, or modified directly within the Kafka cluster, without the presence of a corresponding KafkaTopic resource, the Topic Operator does not manage that topic. The Topic Operator will only manage Kafka topics associated with KafkaTopic resources and does not interfere with topics managed independently within the Kafka cluster. If a KafkaTopic does exist for a Kafka topic, any configuration changes made outside the resource are reverted. 9.2. Topic naming conventions A KafkaTopic resource includes a name for the topic and a label that identifies the name of the Kafka cluster it belongs to. Label identifying a Kafka cluster for topic handling apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: topic-name-1 labels: strimzi.io/cluster: my-cluster spec: topicName: topic-name-1 The label provides the cluster name of the Kafka resource. The Topic Operator uses the label as a mechanism for determining which KafkaTopic resources to manage. If the label does not match the Kafka cluster, the Topic Operator cannot see the KafkaTopic , and the topic is not created. Kafka and OpenShift have their own naming validation rules, and a Kafka topic name might not be a valid resource name in OpenShift. If possible, try and stick to a naming convention that works for both. Consider the following guidelines: Use topic names that reflect the nature of the topic Be concise and keep the name under 63 characters Use all lower case and hyphens Avoid special characters, spaces or symbols The KafkaTopic resource allows you to specify the Kafka topic name using the metadata.name field. 
However, if the desired Kafka topic name is not a valid OpenShift resource name, you can use the spec.topicName property to specify the actual name. The spec.topicName field is optional, and when it's absent, the Kafka topic name defaults to the metadata.name of the topic. When a topic is created, the topic name cannot be changed later. Example of supplying a valid Kafka topic name apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 1 spec: topicName: My.Topic.1 2 # ... 1 A valid topic name that works in OpenShift. 2 A Kafka topic name that uses upper case and periods, which are invalid in OpenShift. If more than one KafkaTopic resource refers to the same Kafka topic, the resource that was created first is considered to be the one managing the topic. The status of the newer resources is updated to indicate a conflict, and their Ready status is changed to False . If a Kafka client application, such as Kafka Streams, automatically creates topics with invalid OpenShift resource names, the Topic Operator generates a valid metadata.name when used in bidirectional mode. It replaces invalid characters and appends a hash to the name. However, this behavior does not apply in (preview) unidirectional mode. Example of replacing an invalid topic name apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic---c55e57fe2546a33f9e603caf57165db4072e827e # ... Note For more information on the requirements for identifiers and names in a cluster, refer to the OpenShift documentation Object Names and IDs . 9.3. Handling changes to topics How the Topic Operator handles changes to topics depends on the mode of topic management . For bidirectional topic management, configuration changes are synchronized between the Kafka topic and the KafkaTopic resource in both directions. Incompatible changes prioritize the Kafka configuration, and the KafkaTopic resource is adjusted accordingly. For unidirectional topic management (currently in preview), configuration changes only go in one direction: from the KafkaTopic resource to the Kafka topic. Any changes to a Kafka topic managed outside the KafkaTopic resource are reverted. 9.3.1. Topic store for bidirectional topic management For bidirectional topic management, the Topic Operator is capable of handling changes to topics when there is no single source of truth. The KafkaTopic resource and the Kafka topic can undergo independent modifications, where real-time observation of changes may not always be feasible, particularly when the Topic Operator is not operational. To handle this, the Topic Operator maintains a topic store that stores topic configuration information about each topic. It compares the state of the Kafka cluster and OpenShift with the topic store to determine the necessary changes for synchronization. This evaluation takes place during startup and at regular intervals while the Topic Operator is active. For example, if the Topic Operator is inactive, and a new KafkaTopic named my-topic is created, upon restart, the Topic Operator recognizes the absence of my-topic in the topic store. It recognizes that the KafkaTopic was created after its last operation. Consequently, the Topic Operator generates the corresponding Kafka topic and saves the metadata in the topic store. The topic store enables the Topic Operator to manage situations where the topic configuration is altered in both Kafka topics and KafkaTopic resources, as long as the changes are compatible. 
When Kafka topic configuration is updated or changes are made to the KafkaTopic custom resource, the topic store is updated after reconciling with the Kafka cluster, as long as the changes are compatible. The topic store is based on the Kafka Streams key-value mechanism, which uses Kafka topics to persist the state. Topic metadata is cached in-memory and accessed locally within the Topic Operator. Updates from operations applied to the local in-memory cache are persisted to a backup topic store on disk. The topic store is continually synchronized with updates from Kafka topics or OpenShift KafkaTopic custom resources. Operations are handled rapidly with the topic store set up this way, but should the in-memory cache crash, it is automatically repopulated from the persistent storage. Internal topics support the handling of topic metadata in the topic store. __strimzi_store_topic Input topic for storing the topic metadata __strimzi-topic-operator-kstreams-topic-store-changelog Retains a log of compacted topic store values Warning Do not delete these topics, as they are essential to the running of the Topic Operator. 9.3.2. Migrating topic metadata from ZooKeeper to the topic store In previous releases of AMQ Streams, topic metadata was stored in ZooKeeper. The topic store removes this requirement, bringing the metadata into the Kafka cluster and under the control of the Topic Operator. When upgrading to AMQ Streams 2.5, the transition to Topic Operator control of the topic store is seamless. Metadata is found and migrated from ZooKeeper, and the old store is deleted. 9.3.3. Downgrading to an AMQ Streams version that uses ZooKeeper to store topic metadata If you are reverting back to a version of AMQ Streams earlier than 1.7, which uses ZooKeeper for the storage of topic metadata, you still downgrade your Cluster Operator to the previous version, then downgrade Kafka brokers and client applications to the previous Kafka version as standard. However, you must also delete the topics that were created for the topic store using a kafka-topics command, specifying the bootstrap address of the Kafka cluster. For example: oc run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete The command must correspond to the type of listener and authentication used to access the Kafka cluster. The Topic Operator will reconstruct the ZooKeeper topic metadata from the state of the topics in Kafka. 9.3.4. Automatic creation of topics Applications can trigger the automatic creation of topics in the Kafka cluster. By default, the Kafka broker configuration auto.create.topics.enable is set to true , allowing the broker to create topics automatically when an application attempts to produce or consume from a non-existing topic. Applications might also use the Kafka AdminClient to automatically create topics. When an application is deployed along with its KafkaTopic resources, it is possible that automatic topic creation in the cluster happens before the Topic Operator can react to the KafkaTopic . For bidirectional topic management, the Topic Operator synchronizes the changes between the topics and KafkaTopic resources. 
If you are trying the unidirectional topic management preview, this can mean that the topics created for an application deployment are initially created with default topic configuration. If the Topic Operator attempts to reconfigure the topics based on KafkaTopic resource specifications included with the application deployment, the operation might fail because the required change to the configuration is not allowed. For example, if the change means lowering the number of topic partitions. For this reason, it is recommended to disable auto.create.topics.enable in the Kafka cluster configuration when using unidirectional topic management. 9.4. Configuring Kafka topics Use the properties of the KafkaTopic resource to configure Kafka topics. Changes made to topic configuration in the KafkaTopic are propagated to Kafka. You can use oc apply to create or modify topics, and oc delete to delete existing topics. For example: oc apply -f <topic_config_file> oc delete KafkaTopic <topic_name> To be able to delete topics, delete.topic.enable must be set to true (default) in the spec.kafka.config of the Kafka resource. This procedure shows how to create a topic with 10 partitions and 2 replicas. Note The procedure is the same for the bidirectional and (preview) unidirectional modes of topic management. Before you begin The KafkaTopic resource does not allow the following changes: Renaming the topic defined in spec.topicName . A mismatch between spec.topicName and status.topicName will be detected. Decreasing the number of partitions using spec.partitions (not supported by Kafka). Modifying the number of replicas specified in spec.replicas . Warning Increasing spec.partitions for topics with keys will alter the partitioning of records, which can cause issues, especially when the topic uses semantic partitioning. Prerequisites A running Kafka cluster configured with a Kafka broker listener using mTLS authentication and TLS encryption. A running Topic Operator (typically deployed with the Entity Operator). For deleting a topic, delete.topic.enable=true (default) in the spec.kafka.config of the Kafka resource. Procedure Configure the KafkaTopic resource. Example Kafka topic configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 Tip When modifying a topic, you can get the current version of the resource using oc get kafkatopic my-topic-1 -o yaml . Create the KafkaTopic resource in OpenShift. oc apply -f <topic_config_file> Wait for the ready status of the topic to change to True : oc get kafkatopics -o wide -w -n <namespace> Kafka topic status NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-1 my-cluster 10 3 True my-topic-2 my-cluster 10 3 my-topic-3 my-cluster 10 3 True Topic creation is successful when the READY output shows True . If the READY column stays blank, get more details on the status from the resource YAML or from the Topic Operator logs. Status messages provide details on the reason for the current status. oc get kafkatopics my-topic-2 -o yaml Details on a topic with a NotReady status # ... status: conditions: - lastTransitionTime: "2022-06-13T10:14:43.351550Z" message: Number of partitions cannot be decreased reason: PartitionDecreaseException status: "True" type: NotReady In this example, the reason the topic is not ready is because the original number of partitions was reduced in the KafkaTopic configuration. Kafka does not support this. 
After resetting the topic configuration, the status shows the topic is ready. oc get kafkatopics my-topic-2 -o wide -w -n <namespace> Status update of the topic NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-2 my-cluster 10 3 True Fetching the details shows no messages oc get kafkatopics my-topic-2 -o yaml Details on a topic with a READY status # ... status: conditions: - lastTransitionTime: '2022-06-13T10:15:03.761084Z' status: 'True' type: Ready 9.5. Configuring topics for replication and number of partitions The recommended configuration for topics managed by the Topic Operator is a topic replication factor of 3, and a minimum of 2 in-sync replicas. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 10 1 replicas: 3 2 config: min.insync.replicas: 2 3 #... 1 The number of partitions for the topic. 2 The number of replica topic partitions. Currently, this cannot be changed in the KafkaTopic resource, but it can be changed using the kafka-reassign-partitions.sh tool. 3 The minimum number of replica partitions that a message must be successfully written to, or an exception is raised. Note In-sync replicas are used in conjunction with the acks configuration for producer applications. The acks configuration determines the number of follower partitions a message must be replicated to before the message is acknowledged as successfully received. The bidirectional Topic Operator runs with acks=all for its internal topics whereby messages must be acknowledged by all in-sync replicas. When scaling Kafka clusters by adding or removing brokers, replication factor configuration is not changed and replicas are not reassigned automatically. However, you can use the kafka-reassign-partitions.sh tool to change the replication factor, and manually reassign replicas to brokers. Alternatively, though the integration of Cruise Control for AMQ Streams cannot change the replication factor for topics, the optimization proposals it generates for rebalancing Kafka include commands that transfer partition replicas and change partition leadership. Additional resources Downgrading AMQ Streams Section 19.1, "Partition reassignment tool overview" Chapter 18, Rebalancing clusters using Cruise Control 9.6. (Preview) Managing KafkaTopic resources without impacting Kafka topics This procedure describes how to convert Kafka topics that are currently managed through the KafkaTopic resource into non-managed topics. This capability can be useful in various scenarios. For instance, you might want to update the metadata.name of a KafkaTopic resource. You can only do that by deleting the original KafkaTopic resource and recreating a new one. By annotating a KafkaTopic resource with strimzi.io/managed=false , you indicate that the Topic Operator should no longer manage that particular topic. This allows you to retain the Kafka topic while making changes to the resource's configuration or other administrative tasks. You can perform this task if you are using unidirectional topic management. Note Unidirectional topic management is available as a preview. Unidirectional topic management is not enabled by default, so you must enable the UnidirectionalTopicOperator feature gate to be able to use it. Prerequisites The Cluster Operator must be deployed. 
Procedure Annotate the KafkaTopic resource in OpenShift, setting strimzi.io/managed to false : oc annotate kafkatopic my-topic-1 strimzi.io/managed=false Specify the metadata.name of the topic in your KafkaTopic resource, which is my-topic-1 in this example. Check the status of the KafkaTopic resource to make sure the request was successful: oc get kafkatopics my-topic-1 -o yaml Example topic with a Ready status apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 124 name: my-topic-1 finalizer: strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 # ... status: observedGeneration: 124 1 topicName: my-topic-1 conditions: - type: Ready status: True lastTransitionTime: 20230301T103000Z 1 Successful reconciliation of the resource means the topic is no longer managed. The value of metadata.generation (the current version of the deployment) must match status.observedGeneration (the latest reconciliation of the resource). You can now make changes to the KafkaTopic resource without it affecting the Kafka topic it was managing. For example, to change the metadata.name , do as follows: Delete the original KafkaTopic resource: oc delete kafkatopic <kafka_topic_name> Recreate the KafkaTopic resource with a different metadata.name , but use spec.topicName to refer to the same topic that was managed by the original resource. If you haven't deleted the original KafkaTopic resource, and you wish to resume management of the Kafka topic again, set the strimzi.io/managed annotation to true or remove the annotation. 9.7. (Preview) Enabling topic management for existing Kafka topics This procedure describes how to enable topic management for topics that are not currently managed through the KafkaTopic resource. You do this by creating a matching KafkaTopic resource. You can perform this task if you are using unidirectional topic management. Note Unidirectional topic management is available as a preview. Unidirectional topic management is not enabled by default, so you must enable the UnidirectionalTopicOperator feature gate to be able to use it. Prerequisites The Cluster Operator must be deployed. Procedure Create a KafkaTopic resource with a metadata.name that is the same as the Kafka topic. Or use spec.topicName if the name of the topic in Kafka would not be a legal OpenShift resource name. Example Kafka topic configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 In this example, the Kafka topic is named my-topic-1 . The Topic Operator checks whether the topic is managed by another KafkaTopic resource. If it is, the older resource takes precedence and a resource conflict error is returned in the status of the new resource. Apply the KafkaTopic resource: oc apply -f <topic_configuration_file> Wait for the operator to update the topic in Kafka. The operator updates the Kafka topic with the spec of the KafkaTopic that has the same name. Check the status of the KafkaTopic resource to make sure the request was successful: oc get kafkatopics my-topic-1 -o yaml Example topic with a Ready status apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 # ... 
status: observedGeneration: 1 1 topicName: my-topic-1 conditions: - type: Ready status: True lastTransitionTime: 20230301T103000Z 1 Successful reconciliation of the resource means the topic is now managed. The value of metadata.generation (the current version of the deployment) must match status.observedGeneration (the latest reconciliation of the resource). 9.8. (Preview) Deleting managed topics Unidirectional topic management supports the deletion of topics managed through the KafkaTopic resource with or without OpenShift finalizers. This is controlled by the STRIMZI_USE_FINALIZERS Topic Operator environment variable. By default, this is set to true , though it can be set to false in the Topic Operator env configuration if you do not want the Topic Operator to add finalizers. Note Unidirectional topic management is available as a preview. Unidirectional topic management is not enabled by default, so you must enable the UnidirectionalTopicOperator feature gate to be able to use it. Finalizers ensure orderly and controlled deletion of KafkaTopic resources. A finalizer for the Topic Operator is added to the metadata of the KafkaTopic resource: Finalizer to control topic deletion apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 finalizer: strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster In this example, the finalizer is added for topic my-topic-1 . The finalizer prevents the topic from being fully deleted until the finalization process is complete. If you then delete the topic using oc delete kafkatopic my-topic-1 , a timestamp is added to the metadata: Finalizer timestamp on deletion apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 finalizer: strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster deletionTimestamp: 20230301T000000.000 The resource is still present. If the deletion fails, it is shown in the status of the resource. When the finalization tasks are successfully executed, the finalizer is removed from the metadata, and the resource is fully deleted. Finalizers also prevent related resources from being deleted. If the unidirectional Topic Operator is not running, it won't be able to remove the metadata.finalizer . Consequently, an attempt to delete the namespace that contains the KafkaTopic resource won't complete until either the operator is restarted, or the finalizer is otherwise removed (for example using oc edit ).
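If the unidirectional Topic Operator is down and a deletion is stuck on its finalizer, the finalizer can be cleared manually. The following is a minimal sketch, assuming a topic named my-topic-1; oc patch is shown as a non-interactive alternative to the oc edit approach mentioned above and is standard OpenShift behavior rather than anything specific to AMQ Streams.

# Inspect the finalizers on the stuck resource
oc get kafkatopic my-topic-1 -o jsonpath='{.metadata.finalizers}'

# Clear the finalizers so that the pending deletion can complete
oc patch kafkatopic my-topic-1 --type=merge -p '{"metadata":{"finalizers":null}}'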
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: topic-name-1 labels: strimzi.io/cluster: my-cluster spec: topicName: topic-name-1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 1 spec: topicName: My.Topic.1 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic---c55e57fe2546a33f9e603caf57165db4072e827e #", "run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2", "apply -f <topic_config_file>", "get kafkatopics -o wide -w -n <namespace>", "NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-1 my-cluster 10 3 True my-topic-2 my-cluster 10 3 my-topic-3 my-cluster 10 3 True", "get kafkatopics my-topic-2 -o yaml", "status: conditions: - lastTransitionTime: \"2022-06-13T10:14:43.351550Z\" message: Number of partitions cannot be decreased reason: PartitionDecreaseException status: \"True\" type: NotReady", "get kafkatopics my-topic-2 -o wide -w -n <namespace>", "NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-2 my-cluster 10 3 True", "get kafkatopics my-topic-2 -o yaml", "status: conditions: - lastTransitionTime: '2022-06-13T10:15:03.761084Z' status: 'True' type: Ready", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 10 1 replicas: 3 2 config: min.insync.replicas: 2 3 #", "annotate kafkatopic my-topic-1 strimzi.io/managed=false", "get kafkatopics my-topic-1 -o yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 124 name: my-topic-1 finalizer: strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 status: observedGeneration: 124 1 topicName: my-topic-1 conditions: - type: Ready status: True lastTransitionTime: 20230301T103000Z", "delete kafkatopic <kafka_topic_name>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2", "apply -f <topic_configuration_file>", "get kafkatopics my-topic-1 -o yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 status: observedGeneration: 1 1 topicName: my-topic-1 conditions: - type: Ready status: True lastTransitionTime: 20230301T103000Z", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 finalizer: strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: generation: 1 name: my-topic-1 finalizer: strimzi.io/topic-operator labels: strimzi.io/cluster: my-cluster deletionTimestamp: 20230301T000000.000" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/using-the-topic-operator-str
6.4.2.2. Adding the New Physical Volume to the Volume Group
6.4.2.2. Adding the New Physical Volume to the Volume Group Add /dev/sdd1 to the existing volume group myvg .
[ "vgextend myvg /dev/sdd1 Volume group \"myvg\" successfully extended pvs -o+pv_used PV VG Fmt Attr PSize PFree Used /dev/sda1 myvg lvm2 a- 17.15G 7.15G 10.00G /dev/sdb1 myvg lvm2 a- 17.15G 15.15G 2.00G /dev/sdc1 myvg lvm2 a- 17.15G 15.15G 2.00G /dev/sdd1 myvg lvm2 a- 17.15G 17.15G 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/add_pv_ex4
Chapter 3. Ceph Object Gateway and the S3 API
Chapter 3. Ceph Object Gateway and the S3 API As a developer, you can use a RESTful application programming interface (API) that is compatible with the Amazon S3 data access model. You can manage the buckets and objects stored in a Red Hat Ceph Storage cluster through the Ceph Object Gateway. 3.1. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.2. S3 limitations Important The following limitations should be used with caution. There are implications related to your hardware selections, so you should always discuss these requirements with your Red Hat account team. Maximum object size when using Amazon S3: Individual Amazon S3 objects can range in size from a minimum of 0B to a maximum of 5TB. The largest object that can be uploaded in a single PUT is 5GB. For objects larger than 100MB, you should consider using the Multipart Upload capability. Maximum metadata size when using Amazon S3: There is no defined limit on the total size of user metadata that can be applied to an object, but a single HTTP request is limited to 16,000 bytes. The amount of data overhead Red Hat Ceph Storage cluster produces to store S3 objects and metadata: The estimate here is 200-300 bytes plus the length of the object name. Versioned objects consume additional space proportional to the number of versions. Also, transient overhead is produced during multi-part upload and other transactional updates, but these overheads are recovered during garbage collection. Additional Resources See the Red Hat Ceph Storage Developer Guide for details on the unsupported header fields . 3.3. Accessing the Ceph Object Gateway with the S3 API As a developer, you must configure access to the Ceph Object Gateway and the Secure Token Service (STS) before you can start using the Amazon S3 API. 3.3.1. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. A RESTful client. 3.3.2. S3 authentication Requests to the Ceph Object Gateway can be either authenticated or unauthenticated. Ceph Object Gateway assumes unauthenticated requests are sent by an anonymous user. Ceph Object Gateway supports canned ACLs. For most use cases, clients use existing open source libraries like the Amazon SDK's AmazonS3Client for Java, and Python Boto. With open source libraries you simply pass in the access key and secret key and the library builds the request header and authentication signature for you. However, you can create requests and sign them too. Authenticating a request requires including an access key and a base 64-encoded hash-based Message Authentication Code (HMAC) in the request before it is sent to the Ceph Object Gateway server. Ceph Object Gateway uses an S3-compatible authentication approach. Example In the above example, replace ACCESS_KEY with the value for the access key ID followed by a colon ( : ). Replace HASH_OF_HEADER_AND_SECRET with a hash of a canonicalized header string and the secret corresponding to the access key ID. Generate hash of header string and secret To generate the hash of the header string and secret: Get the value of the header string. Normalize the request header string into canonical form. Generate an HMAC using a SHA-1 hashing algorithm. Encode the hmac result as base-64. Normalize header To normalize the header into canonical form: Get all content- headers. Remove all content- headers except for content-type and content-md5 . Ensure the content- header names are lowercase. Sort the content- headers lexicographically. 
Ensure you have a Date header AND ensure the specified date uses GMT and not an offset. Get all headers beginning with x-amz- . Ensure that the x-amz- headers are all lowercase. Sort the x-amz- headers lexicographically. Combine multiple instances of the same field name into a single field and separate the field values with a comma. Replace white space and line breaks in header values with a single space. Remove white space before and after colons. Append a new line after each header. Merge the headers back into the request header. Replace the HASH_OF_HEADER_AND_SECRET with the base-64 encoded HMAC string. Additional Resources For additional details, consult the Signing and Authenticating REST Requests section of Amazon Simple Storage Service documentation. 3.3.3. Server-Side Encryption (SSE) The Ceph Object Gateway supports server-side encryption of uploaded objects for the S3 application programming interface (API). Server-side encryption means that the S3 client sends data over HTTP in its unencrypted form, and the Ceph Object Gateway stores that data in the Red Hat Ceph Storage cluster in encrypted form. Note Red Hat does NOT support S3 object encryption of Static Large Object (SLO) or Dynamic Large Object (DLO). Currently, none of the Server-Side Encryption (SSE) modes have implemented support for CopyObject . It is currently being developed [BZ#2149450 ]. Important To use encryption, client requests MUST send requests over an SSL connection. Red Hat does not support S3 encryption from a client unless the Ceph Object Gateway uses SSL. However, for testing purposes, administrators can disable SSL during testing by setting the rgw_crypt_require_ssl configuration setting to false at runtime, using the ceph config set client.rgw command, and then restarting the Ceph Object Gateway instance. In a production environment, it might not be possible to send encrypted requests over SSL. In such a case, send requests using HTTP with server-side encryption. For information about how to configure HTTP with server-side encryption, see the Additional Resources section below. S3 multipart uploads using Server-Side Encryption now replicate correctly in multi-site. Previously, the replicas of such objects were corrupted by decryption You can use the radosgw-admin bucket resync encrypted multipart --bucket-name BUCKET_NAME command to identify multipart uploads using SSE. This command scans for the primary non-replicated copies of encrypted multipart objects. For each object, the LastModified timestamp of any identified object is incremented by 1ns to cause peer zones to replicate it again. For multi-site deployments that use SSE, run this command against every bucket in every zone after upgrading all the zones. This ensures that S3 multipart uploads using SSE replicates correctly in multi-site. There are two options for the management of encryption keys: Customer-provided Keys When using customer-provided keys, the S3 client passes an encryption key along with each request to read or write encrypted data. It is the customer's responsibility to manage those keys. Customers must remember which key the Ceph Object Gateway used to encrypt each object. Ceph Object Gateway implements the customer-provided key behavior in the S3 API according to the Amazon SSE-C specification. Since the customer handles the key management and the S3 client passes keys to the Ceph Object Gateway, the Ceph Object Gateway requires no special configuration to support this encryption mode. 
Key Management Service When using a key management service, the secure key management service stores the keys and the Ceph Object Gateway retrieves them on demand to serve requests to encrypt or decrypt data. Ceph Object Gateway implements the key management service behavior in the S3 API according to the Amazon SSE-KMS specification. Important Currently, the only tested key management implementations are HashiCorp Vault, and OpenStack Barbican. However, OpenStack Barbican is a Technology Preview and is not supported for use in production systems. Additional Resources Amazon SSE-C Amazon SSE-KMS Configuring server-side encryption The HashiCorp Vault 3.3.4. S3 access control lists Ceph Object Gateway supports S3-compatible Access Control Lists (ACL) functionality. An ACL is a list of access grants that specify which operations a user can perform on a bucket or on an object. Each grant has a different meaning when applied to a bucket versus applied to an object: Table 3.1. User Operations Permission Bucket Object READ Grantee can list the objects in the bucket. Grantee can read the object. WRITE Grantee can write or delete objects in the bucket. N/A READ_ACP Grantee can read bucket ACL. Grantee can read the object ACL. WRITE_ACP Grantee can write bucket ACL. Grantee can write to the object ACL. FULL_CONTROL Grantee has full permissions for object in the bucket. Grantee can read or write to the object ACL. 3.3.5. Preparing access to the Ceph Object Gateway using S3 You have to follow some pre-requisites on the Ceph Object Gateway node before attempting to access the gateway server. Prerequisites Installation of the Ceph Object Gateway software. Root-level access to the Ceph Object Gateway node. Procedure As root , open port 8080 on the firewall: Add a wildcard to the DNS server that you are using for the gateway as mentioned in the Object Gateway Configuration and Administration Guide . You can also set up the gateway node for local DNS caching. To do so, execute the following steps: As root , install and setup dnsmasq : Replace IP_OF_GATEWAY_NODE and FQDN_OF_GATEWAY_NODE with the IP address and FQDN of the gateway node. As root , stop NetworkManager: As root , set the gateway server's IP as the nameserver: Replace IP_OF_GATEWAY_NODE and FQDN_OF_GATEWAY_NODE with the IP address and FQDN of the gateway node. Verify subdomain requests: Replace FQDN_OF_GATEWAY_NODE with the FQDN of the gateway node. Warning Setting up the gateway server for local DNS caching is for testing purposes only. You won't be able to access the outside network after doing this. It is strongly recommended to use a proper DNS server for the Red Hat Ceph Storage cluster and gateway node. Create the radosgw user for S3 access carefully as mentioned in the Object Gateway Configuration and Administration Guide and copy the generated access_key and secret_key . You will need these keys for S3 access and subsequent bucket management tasks. 3.3.6. Accessing the Ceph Object Gateway using Ruby AWS S3 You can use Ruby programming language along with aws-s3 gem for S3 access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby AWS::S3 . Prerequisites User-level access to Ceph Object Gateway. Root-level access to the node accessing the Ceph Object Gateway. Internet access. Procedure Install the ruby package: Note The above command will install ruby and its essential dependencies like rubygems and ruby-libs . 
If somehow the command does not install all the dependencies, install them separately. Install the aws-s3 Ruby package: Create a project directory: Create the connection file: Paste the following contents into the conn.rb file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when you created the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Example Save the file and exit the editor. Make the file executable: Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the output of the command is true it would mean that bucket my-new-bucket1 was created successfully. Create a new file for listing owned buckets: Paste the following content into the file: Save the file and exit the editor. Make the file executable: Run the file: The output should look something like this: Create a new file for creating an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will create a file hello.txt with the string Hello World! . Create a new file for listing a bucket's content: Paste the following content into the file: Save the file and exit the editor. Make the file executable. Run the file: The output will look something like this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.rb file to create empty buckets, for example, my-new-bucket4 , my-new-bucket5 . , edit the above-mentioned del_empty_bucket.rb file accordingly before trying to delete empty buckets. Create a new file for deleting non-empty buckets: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Create a new file for deleting an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will delete the object hello.txt . 3.3.7. Accessing the Ceph Object Gateway using Ruby AWS SDK You can use the Ruby programming language along with aws-sdk gem for S3 access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby AWS::SDK . Prerequisites User-level access to Ceph Object Gateway. Root-level access to the node accessing the Ceph Object Gateway. Internet access. Procedure Install the ruby package: Note The above command will install ruby and its essential dependencies like rubygems and ruby-libs . If somehow the command does not install all the dependencies, install them separately. Install the aws-sdk Ruby package: Create a project directory: Create the connection file: Paste the following contents into the conn.rb file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node. 
Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when you created the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Example Save the file and exit the editor. Make the file executable: Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the file: Syntax Save the file and exit the editor. Make the file executable: Run the file: If the output of the command is true , this means that bucket my-new-bucket2 was created successfully. Create a new file for listing owned buckets: Paste the following content into the file: Save the file and exit the editor. Make the file executable: Run the file: The output should look something like this: Create a new file for creating an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will create a file hello.txt with the string Hello World! . Create a new file for listing a bucket's content: Paste the following content into the file: Save the file and exit the editor. Make the file executable. Run the file: The output will look something like this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.rb file to create empty buckets, for example, my-new-bucket6 , my-new-bucket7 . , edit the above-mentioned del_empty_bucket.rb file accordingly before trying to delete empty buckets. Create a new file for deleting a non-empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Create a new file for deleting an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will delete the object hello.txt . 3.3.8. Accessing the Ceph Object Gateway using PHP You can use PHP scripts for S3 access. This procedure provides some example PHP scripts to do various tasks, such as deleting a bucket or an object. Important The examples given below are tested against php v5.4.16 and aws-sdk v2.8.24 . DO NOT use the latest version of aws-sdk for php as it requires php >= 5.5+ . php 5.5 is not available in the default repositories of RHEL 7 . If you want to use php 5.5 , you will have to enable epel and other third-party repositories. Also, the configuration options for php 5.5 and latest version of aws-sdk are different. Prerequisites Root-level access to a development workstation. Internet access. Procedure Install the php package: Download the zip archive of aws-sdk for PHP and extract it. Create a project directory: Copy the extracted aws directory to the project directory. For example: Create the connection file: Paste the following contents in the conn.php file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when creating the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . 
Replace PATH_TO_AWS with the absolute path to the extracted aws directory that you copied to the php project directory. Save the file and exit the editor. Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the new file: Syntax Save the file and exit the editor. Run the file: Create a new file for listing owned buckets: Paste the following content into the file: Syntax Save the file and exit the editor. Run the file: The output should look similar to this: Create an object by first creating a source file named hello.txt : Create a new php file: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: This will create the object hello.txt in bucket my-new-bucket3 . Create a new file for listing a bucket's content: Paste the following content into the file: Syntax Save the file and exit the editor. Run the file: The output will look similar to this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.php file to create empty buckets, for example, my-new-bucket4 , my-new-bucket5 . , edit the above-mentioned del_empty_bucket.php file accordingly before trying to delete empty buckets. Important Deleting a non-empty bucket is currently not supported in PHP 2 and newer versions of aws-sdk . Create a new file for deleting an object: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: This will delete the object hello.txt . 3.3.9. Secure Token Service The Amazon Web Services' Secure Token Service (STS) returns a set of temporary security credentials for authenticating users. The Ceph Object Gateway implements a subset of the STS application programming interfaces (APIs) to provide temporary credentials for identity and access management (IAM). Using these temporary credentials authenticates S3 calls by utilizing the STS engine in the Ceph Object Gateway. You can restrict temporary credentials even further by using an IAM policy, which is a parameter passed to the STS APIs. Additional Resources Amazon Web Services Secure Token Service welcome page . See the Configuring and using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on STS Lite and Keystone. See the Working around the limitations of using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on the limitations of STS Lite and Keystone. 3.3.9.1. The Secure Token Service application programming interfaces The Ceph Object Gateway implements the following Secure Token Service (STS) application programming interfaces (APIs): AssumeRole This API returns a set of temporary credentials for cross-account access. These temporary credentials allow for both, permission policies attached with Role and policies attached with AssumeRole API. The RoleArn and the RoleSessionName request parameters are required, but the other request parameters are optional. RoleArn Description The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters. Type String Required Yes RoleSessionName Description Identifying the role session name to assume. 
The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The = , , , . , @ , and - characters are allowed, but no spaces allowed. Type String Required Yes Policy Description An identity and access management policy (IAM) in a JSON format for use in an inline session. This parameter's value has a length of 1 to 2048 characters. Type String Required No DurationSeconds Description The duration of the session in seconds, with a minimum value of 900 seconds to a maximum value of 43200 seconds. The default value is 3600 seconds. Type Integer Required No ExternalId Description When assuming a role for another account, provide the unique external identifier if available. This parameter's value has a length of 2 to 1224 characters. Type String Required No SerialNumber Description A user's identification number from their associated multi-factor authentication (MFA) device. The parameter's value can be the serial number of a hardware device or a virtual device, with a length of 9 to 256 characters. Type String Required No TokenCode Description The value generated from the multi-factor authentication (MFA) device, if the trust policy requires MFA. If an MFA device is required, and if this parameter's value is empty or expired, then AssumeRole call returns an "access denied" error message. This parameter's value has a fixed length of 6 characters. Type String Required No AssumeRoleWithWebIdentity This API returns a set of temporary credentials for users who have been authenticated by an application, such as OpenID Connect or OAuth 2.0 Identity Provider. The RoleArn and the RoleSessionName request parameters are required, but the other request parameters are optional. RoleArn Description The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters. Type String Required Yes RoleSessionName Description Identifying the role session name to assume. The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The = , , , . , @ , and - characters are allowed, but no spaces are allowed. Type String Required Yes Policy Description An identity and access management policy (IAM) in a JSON format for use in an inline session. This parameter's value has a length of 1 to 2048 characters. Type String Required No DurationSeconds Description The duration of the session in seconds, with a minimum value of 900 seconds to a maximum value of 43200 seconds. The default value is 3600 seconds. Type Integer Required No ProviderId Description The fully qualified host component of the domain name from the identity provider. This parameter's value is only valid for OAuth 2.0 access tokens, with a length of 4 to 2048 characters. Type String Required No WebIdentityToken Description The OpenID Connect identity token or OAuth 2.0 access token provided from an identity provider. This parameter's value has a length of 4 to 2048 characters. Type String Required No Additional Resources See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. Amazon Web Services Security Token Service, the AssumeRole action. Amazon Web Services Security Token Service, the AssumeRoleWithWebIdentity action. 3.3.9.2. 
Configuring the Secure Token Service Configure the Secure Token Service (STS) for use with the Ceph Object Gateway by setting the rgw_sts_key , and rgw_s3_auth_use_sts options. Note The S3 and STS APIs co-exist in the same namespace, and both can be accessed from the same endpoint in the Ceph Object Gateway. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. Root-level access to a Ceph Manager node. Procedure Set the following configuration options for the Ceph Object Gateway client: Syntax The rgw_sts_key is the STS key for encrypting or decrypting the session token and is exactly 16 hex characters. Important The STS key needs to be alphanumeric. Example Restart the Ceph Object Gateway for the added key to take effect. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Additional Resources See Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. See the The basics of Ceph configuration chapter in the Red Hat Ceph Storage Configuration Guide for more details on using the Ceph configuration database. 3.3.9.3. Creating a user for an OpenID Connect provider To establish trust between the Ceph Object Gateway and the OpenID Connect Provider create a user entity and a role trust policy. Prerequisites User-level access to the Ceph Object Gateway node. Procedure Create a new Ceph user: Syntax Example Configure the Ceph user capabilities: Syntax Example Add a condition to the role trust policy using the Secure Token Service (STS) API: Syntax Important The app_id in the syntax example above must match the AUD_FIELD field of the incoming token. Additional Resources See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon's website. See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. 3.3.9.4. Obtaining a thumbprint of an OpenID Connect provider To get the OpenID Connect provider's (IDP) configuration document. Prerequisites Installation of the openssl and curl packages. Procedure Get the configuration document from the IDP's URL: Syntax Example Get the IDP certificate: Syntax Example Copy the result of the "x5c" response from the command and paste it into the certificate.crt file. Include --BEGIN CERTIFICATE-- at the beginning and --END CERTIFICATE-- at the end. Get the certificate thumbprint: Syntax Example Remove all the colons from the SHA1 fingerprint and use this as the input for creating the IDP entity in the IAM request. Additional Resources See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon's website. See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. 3.3.9.5. Configuring and using STS Lite with Keystone (Technology Preview) The Amazon Secure Token Service (STS) and S3 APIs co-exist in the same namespace. 
The STS options can be configured in conjunction with the Keystone options. Note Both S3 and STS APIs can be accessed using the same endpoint in the Ceph Object Gateway. Prerequisites Red Hat Ceph Storage 5.0 or higher. A running Ceph Object Gateway. Installation of the Boto Python module, version 3 or higher. Root-level access to a Ceph Manager node. User-level access to an OpenStack node. Procedure Set the following configuration options for the Ceph Object Gateway client: Syntax The rgw_sts_key is the STS key for encrypting or decrypting the session token and is exactly 16 hex characters. Important The STS key needs to be alphanumeric. Example Generate the EC2 credentials on the OpenStack node: Example Use the generated credentials to get back a set of temporary security credentials using the GetSessionToken API: Example The temporary credentials obtained can then be used to make S3 calls: Example Create a new S3Access role and configure a policy. Assign a user with administrative CAPS: Syntax Example Create the S3Access role: Syntax Example Attach a permission policy to the S3Access role: Syntax Example Now another user can assume the role of the gwadmin user. For example, the gwuser user can assume the permissions of the gwadmin user. Make a note of the assuming user's access_key and secret_key values. Example Use the AssumeRole API call, providing the access_key and secret_key values from the assuming user: Example Important The AssumeRole API requires the S3Access role. Additional Resources See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module. See the Create a User section in the Red Hat Ceph Storage Object Gateway Guide for more information. 3.3.9.6. Working around the limitations of using STS Lite with Keystone (Technology Preview) A limitation of Keystone is that it does not support Secure Token Service (STS) requests. Another limitation is that the payload hash is not included with the request. To work around these two limitations, the Boto authentication code must be modified. Prerequisites A running Red Hat Ceph Storage cluster, version 5.0 or higher. A running Ceph Object Gateway. Installation of the Boto Python module, version 3 or higher. Procedure Open and edit Boto's auth.py file. Add the following four lines to the code block: class SigV4Auth(BaseSigner): """ Sign a request with Signature V4. """ REQUIRES_REGION = True def __init__(self, credentials, service_name, region_name): self.credentials = credentials # We initialize these value here so the unit tests can have # valid values. But these will get overriden in ``add_auth`` # later for real requests.
self._region_name = region_name if service_name == 'sts': 1 self._service_name = 's3' 2 else: 3 self._service_name = service_name 4 Add the following two lines to the code block: def _modify_request_before_signing(self, request): if 'Authorization' in request.headers: del request.headers['Authorization'] self._set_necessary_date_headers(request) if self.credentials.token: if 'X-Amz-Security-Token' in request.headers: del request.headers['X-Amz-Security-Token'] request.headers['X-Amz-Security-Token'] = self.credentials.token if not request.context.get('payload_signing_enabled', True): if 'X-Amz-Content-SHA256' in request.headers: del request.headers['X-Amz-Content-SHA256'] request.headers['X-Amz-Content-SHA256'] = UNSIGNED_PAYLOAD 1 else: 2 request.headers['X-Amz-Content-SHA256'] = self.payload(request) Additional Resources See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module. 3.4. S3 bucket operations As a developer, you can perform bucket operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway. The following table list the Amazon S3 functional operations for buckets, along with the function's support status. Table 3.2. Bucket operations Feature Status Notes List Buckets Supported Create a Bucket Supported Different set of canned ACLs. Put Bucket Website Supported Get Bucket Website Supported Delete Bucket Website Supported Bucket Lifecycle Partially Supported Expiration , NoncurrentVersionExpiration and AbortIncompleteMultipartUpload supported. Put Bucket Lifecycle Partially Supported Expiration , NoncurrentVersionExpiration and AbortIncompleteMultipartUpload supported. Delete Bucket Lifecycle Supported Get Bucket Objects Supported Bucket Location Supported Get Bucket Version Supported Put Bucket Version Supported Delete Bucket Supported Get Bucket ACLs Supported Different set of canned ACLs Put Bucket ACLs Supported Different set of canned ACLs Get Bucket cors Supported Put Bucket cors Supported Delete Bucket cors Supported List Bucket Object Versions Supported Head Bucket Supported List Bucket Multipart Uploads Supported Bucket Policies Partially Supported Get a Bucket Request Payment Supported Put a Bucket Request Payment Supported Multi-tenant Bucket Operations Supported Get PublicAccessBlock Supported Put PublicAccessBlock Supported Delete PublicAccessBlock Supported 3.4.1. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.4.2. S3 create bucket notifications Create bucket notifications at the bucket level. The notification configuration has the Red Hat Ceph Storage Object Gateway S3 events, ObjectCreated and ObjectRemoved . These need to be published and the destination to send the bucket notifications. Bucket notifications are S3 operations. To create a bucket notification for s3:objectCreate and s3:objectRemove events, use PUT: Example Important Red Hat supports ObjectCreate events, such as put , post , multipartUpload , and copy . Red Hat also supports ObjectRemove events, such as object_delete and s3_multi_object_delete . Request Entities NotificationConfiguration Description list of TopicConfiguration entities. Type Container Required Yes TopicConfiguration Description Id , Topic , and list of Event entities. Type Container Required Yes id Description Name of the notification. Type String Required Yes Topic Description Topic Amazon Resource Name(ARN) Note The topic must be created beforehand. 
Type String Required Yes Event Description List of supported events. Multiple event entities can be used. If omitted, all events are handled. Type String Required No Filter Description S3Key , S3Metadata and S3Tags entities. Type Container Required No S3Key Description A list of FilterRule entities, for filtering based on the object key. At most, 3 entities may be in the list, for example Name would be prefix , suffix , or regex . All filter rules in the list must match for the filter to match. Type Container Required No S3Metadata Description A list of FilterRule entities, for filtering based on object metadata. All filter rules in the list must match the metadata defined on the object. However, the object still matches if it has other metadata entries not listed in the filter. Type Container Required No S3Tags Description A list of FilterRule entities, for filtering based on object tags. All filter rules in the list must match the tags defined on the object. However, the object still matches if it has other tags not listed in the filter. Type Container Required No S3Key.FilterRule Description Name and Value entities. Name is : prefix , suffix , or regex . The Value would hold the key prefix, key suffix, or a regular expression for matching the key, accordingly. Type Container Required Yes S3Metadata.FilterRule Description Name and Value entities. Name is the name of the metadata attribute for example x-amz-meta-xxx . The value is the expected value for this attribute. Type Container Required Yes S3Tags.FilterRule Description Name and Value entities. Name is the tag key, and the value is the tag value. Type Container Required Yes HTTP response 400 Status Code MalformedXML Description The XML is not well-formed. 400 Status Code InvalidArgument Description Missing Id or missing or invalid topic ARN or invalid event. 404 Status Code NoSuchBucket Description The bucket does not exist. 404 Status Code NoSuchKey Description The topic does not exist. 3.4.3. S3 get bucket notifications Get a specific notification or list all the notifications configured on a bucket. Syntax Example Example Response Note The notification subresource returns the bucket notification configuration or an empty NotificationConfiguration element. The caller must be the bucket owner. Request Entities notification-id Description Name of the notification. All notifications are listed if the ID is not provided. Type String NotificationConfiguration Description list of TopicConfiguration entities. Type Container Required Yes TopicConfiguration Description Id , Topic , and list of Event entities. Type Container Required Yes id Description Name of the notification. Type String Required Yes Topic Description Topic Amazon Resource Name(ARN) Note The topic must be created beforehand. Type String Required Yes Event Description Handled event. Multiple event entities may exist. Type String Required Yes Filter Description The filters for the specified configuration. Type Container Required No HTTP response 404 Status Code NoSuchBucket Description The bucket does not exist. 404 Status Code NoSuchKey Description The notification does not exist if it has been provided. 3.4.4. S3 delete bucket notifications Delete a specific or all notifications from a bucket. Note Notification deletion is an extension to the S3 notification API. Any defined notifications on a bucket are deleted when the bucket is deleted. Deleting an unknown notification for example double delete , is not considered an error. 
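The create and get notification operations from the two preceding sections can also be driven from the Boto Python module used elsewhere in this guide. The following is a minimal sketch rather than a definitive implementation; the endpoint, credentials, bucket name, notification ID, and topic ARN are placeholder values, the topic must already exist, and the DELETE extension described in this section is shown by the syntax that follows:

import boto3

# Minimal sketch: all endpoint, credential, bucket, and topic values below are
# placeholders and must be replaced with values from your own environment.
client = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Create a notification that publishes ObjectCreated and ObjectRemoved events
# to a pre-existing topic (the topic ARN shown here is an example value).
client.put_bucket_notification_configuration(
    Bucket="mybucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "Id": "mynotif",
                "TopicArn": "arn:aws:sns:default::mytopic",
                "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
            }
        ]
    },
)

# Retrieve the notification configuration that was just set.
print(client.get_bucket_notification_configuration(Bucket="mybucket"))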
To delete a specific or all notifications use DELETE: Syntax Example Request Entities notification-id Description Name of the notification. All notifications on the bucket are deleted if the notification ID is not provided. Type String HTTP response 404 Status Code NoSuchBucket Description The bucket does not exist. 3.4.5. Accessing bucket host names There are two different modes of accessing the buckets. The first, and preferred method identifies the bucket as the top-level directory in the URI. Example The second method identifies the bucket via a virtual bucket host name. Example Tip Red Hat prefers the first method, because the second method requires expensive domain certification and DNS wild cards. 3.4.6. S3 list buckets GET / returns a list of buckets created by the user making the request. GET / only returns buckets created by an authenticated user. You cannot make an anonymous request. Syntax Response Entities Buckets Description Container for list of buckets. Type Container Bucket Description Container for bucket information. Type Container Name Description Bucket name. Type String CreationDate Description UTC time when the bucket was created. Type Date ListAllMyBucketsResult Description A container for the result. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String 3.4.7. S3 return a list of bucket objects Returns a list of bucket objects. Syntax Parameters prefix Description Only returns objects that contain the specified prefix. Type String delimiter Description The delimiter between the prefix and the rest of the object name. Type String marker Description A beginning index for the list of objects returned. Type String max-keys Description The maximum number of keys to return. Default is 1000. Type Integer HTTP Response 200 Status Code OK Description Buckets retrieved. GET / BUCKET returns a container for buckets with the following fields: Bucket Response Entities ListBucketResult Description The container for the list of objects. Type Entity Name Description The name of the bucket whose contents will be returned. Type String Prefix Description A prefix for the object keys. Type String Marker Description A beginning index for the list of objects returned. Type String MaxKeys Description The maximum number of keys returned. Type Integer Delimiter Description If set, objects with the same prefix will appear in the CommonPrefixes list. Type String IsTruncated Description If true , only a subset of the bucket's contents were returned. Type Boolean CommonPrefixes Description If multiple objects contain the same prefix, they will appear in this list. Type Container The ListBucketResult contains objects, where each object is within a Contents container. Object Response Entities Contents Description A container for the object. Type Object Key Description The object's key. Type String LastModified Description The object's last-modified date and time. Type Date ETag Description An MD-5 hash of the object. Etag is an entity tag. Type String Size Description The object's size. Type Integer StorageClass Description Should always return STANDARD . Type String 3.4.8. S3 create a new bucket Creates a new bucket. To create a bucket, you must have a user ID and a valid AWS Access Key ID to authenticate requests. You can not create buckets as an anonymous user. 
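The following is a minimal Boto sketch of an authenticated bucket creation request; the endpoint and credentials are placeholder values, and the example bucket name follows the naming constraints listed below:

import boto3

# Minimal sketch: the endpoint and credentials are placeholders for your own
# Ceph Object Gateway and S3 user.
client = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Create a bucket; the name must satisfy the constraints listed below.
client.create_bucket(Bucket="my-new-bucket")

# Optionally, apply a canned ACL at creation time instead:
# client.create_bucket(Bucket="my-new-bucket", ACL="private")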
Constraints In general, bucket names should follow domain name constraints. Bucket names must be unique. Bucket names cannot be formatted as IP address. Bucket names can be between 3 and 63 characters long. Bucket names must not contain uppercase characters or underscores. Bucket names must start with a lowercase letter or number. Bucket names can contain a dash (-). Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number. Note The above constraints are relaxed if rgw_relaxed_s3_bucket_names is set to true . The bucket names must still be unique, cannot be formatted as IP address, and can contain letters, numbers, periods, dashes, and underscores of up to 255 characters long. Syntax Parameters x-amz-acl Description Canned ACLs. Valid Values private , public-read , public-read-write , authenticated-read Required No HTTP Response If the bucket name is unique, within constraints, and unused, the operation will succeed. If a bucket with the same name already exists and the user is the bucket owner, the operation will succeed. If the bucket name is already in use, the operation will fail. 409 Status Code BucketAlreadyExists Description Bucket already exists under different user's ownership. 3.4.9. S3 put bucket website The put bucket website API sets the configuration of the website that is specified in the website subresource. To configure a bucket as a website, the website subresource can be added on the bucket. Note Put operation requires S3:PutBucketWebsite permission. By default, only the bucket owner can configure the website attached to a bucket. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.4.10. S3 get bucket website The get bucket website API retrieves the configuration of the website that is specified in the website subresource. Note Get operation requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.4.11. S3 delete bucket website The delete bucket website API removes the website configuration for a bucket. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.4.12. S3 delete a bucket Deletes a bucket. You can reuse bucket names following a successful bucket removal. Syntax HTTP Response 204 Status Code No Content Description Bucket removed. 3.4.13. S3 bucket lifecycle You can use a bucket lifecycle configuration to manage your objects so they are stored effectively throughout their lifetime. The S3 API in the Ceph Object Gateway supports a subset of the AWS bucket lifecycle actions: Expiration : This defines the lifespan of objects within a bucket. It takes the number of days the object should live or expiration date, at which point Ceph Object Gateway will delete the object. If the bucket doesn't enable versioning, Ceph Object Gateway will delete the object permanently. If the bucket enables versioning, Ceph Object Gateway will create a delete marker for the current version, and then delete the current version. NoncurrentVersionExpiration : This defines the lifespan of non-current object versions within a bucket. To use this feature, the bucket must enable versioning. 
It takes the number of days a non-current object should live, at which point Ceph Object Gateway will delete the non-current object. AbortIncompleteMultipartUpload : This defines the number of days an incomplete multipart upload should live before it is aborted. The lifecycle configuration contains one or more rules using the <Rule> element. Example A lifecycle rule can apply to all or a subset of objects in a bucket based on the <Filter> element that you specify in the lifecycle rule. You can specify a filter in several ways: Key prefixes Object tags Both key prefix and one or more object tags Key prefixes You can apply a lifecycle rule to a subset of objects based on the key name prefix. For example, specifying <keypre/> would apply to objects that begin with keypre/ : You can also apply different lifecycle rules to objects with different key prefixes: Object tags You can apply a lifecycle rule to only objects with a specific tag using the <Key> and <Value> elements: Both prefix and one or more tags In a lifecycle rule, you can specify a filter based on both the key prefix and one or more tags. They must be wrapped in the <And> element. A filter can have only one prefix, and zero or more tags: Additional Resources See the S3 GET bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for details on getting a bucket lifecycle. See the S3 create or replace a bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for details on creating a bucket lifecycle. See the S3 delete a bucket lifecycle secton in the Red Hat Ceph Storage Developer Guide for details on deleting a bucket lifecycle. 3.4.14. S3 GET bucket lifecycle To get a bucket lifecycle, use GET and specify a destination bucket. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response contains the bucket lifecycle and its elements. 3.4.15. S3 create or replace a bucket lifecycle To create or replace a bucket lifecycle, use PUT and specify a destination bucket and a lifecycle configuration. The Ceph Object Gateway only supports a subset of the S3 lifecycle functionality. Syntax Request Headers content-md5 Description A base64 encoded MD-5 hash of the message Valid Values String No defaults or constraints. Required No Additional Resources See the S3 common request headers section in Appendix B of the Red Hat Ceph Storage Developer Guide for more information on Amazon S3 common request headers. See the S3 bucket lifecycles section of the Red Hat Ceph Storage Developer Guide for more information on Amazon S3 bucket lifecycles. 3.4.16. S3 delete a bucket lifecycle To delete a bucket lifecycle, use DELETE and specify a destination bucket. Syntax Request Headers The request does not contain any special elements. Response The response returns common response status. Additional Resources See the S3 common request headers section in Appendix B of the Red Hat Ceph Storage Developer Guide for more information on Amazon S3 common request headers. See the S3 common response status codes section in Appendix C of Red Hat Ceph Storage Developer Guide for more information on Amazon S3 common response status codes. 3.4.17. S3 get bucket location Retrieves the bucket's zone group. The user needs to be the bucket owner to call this. A bucket can be constrained to a zone group by providing LocationConstraint during a PUT request. Add the location subresource to the bucket resource as shown below. 
Syntax Response Entities LocationConstraint Description The zone group where bucket resides, an empty string for default zone group. Type String 3.4.18. S3 get bucket versioning Retrieves the versioning state of a bucket. The user needs to be the bucket owner to call this. Add the versioning subresource to the bucket resource as shown below. Syntax 3.4.19. S3 put bucket versioning This subresource set the versioning state of an existing bucket. The user needs to be the bucket owner to set the versioning state. If the versioning state has never been set on a bucket, then it has no versioning state. Doing a GET versioning request does not return a versioning state value. Setting the bucket versioning state: Enabled : Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID. Suspended : Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID null. Syntax Example Bucket Request Entities VersioningConfiguration Description A container for the request. Type Container Status Description Sets the versioning state of the bucket. Valid Values: Suspended/Enabled Type String 3.4.20. S3 get bucket access control lists Retrieves the bucket access control list. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the acl subresource to the bucket request as shown below. Syntax Response Entities AccessControlPolicy Description A container for the response. Type Container AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String Grant Description A container for Grantee and Permission . Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.4.21. S3 put bucket Access Control Lists Sets an access control to an existing bucket. The user needs to be the bucket owner or to have been granted WRITE_ACP permission on the bucket. Add the acl subresource to the bucket request as shown below. Syntax Request Entities S3 list multipart uploads AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String Grant Description A container for Grantee and Permission . Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.4.22. S3 get bucket cors Retrieves the cors configuration information set for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 3.4.23. S3 put bucket cors Sets the cors configuration for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 3.4.24. S3 delete a bucket cors Deletes the cors configuration information set for the bucket. 
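The put, get, and delete cors operations described in this and the two preceding sections map onto the Boto Python module as in the following minimal sketch; the endpoint, credentials, bucket name, and allowed origin are placeholder values:

import boto3

# Minimal sketch: endpoint, credentials, and bucket name are placeholder values.
client = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Set a CORS rule that allows GET requests from one example origin.
client.put_bucket_cors(
    Bucket="mybucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedMethods": ["GET"],
                "AllowedOrigins": ["https://www.example.com"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)

# Read the configuration back, then remove it.
print(client.get_bucket_cors(Bucket="mybucket"))
client.delete_bucket_cors(Bucket="mybucket")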
The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 3.4.25. S3 list bucket object versions Returns a list of metadata about all the version of objects within a bucket. Requires READ access to the bucket. Add the versions subresource to the bucket request as shown below. Syntax You can specify parameters for GET / BUCKET ?versions , but none of them are required. Parameters prefix Description Returns in-progress uploads whose keys contain the specified prefix. Type String delimiter Description The delimiter between the prefix and the rest of the object name. Type String key-marker Description The beginning marker for the list of uploads. Type String max-keys Description The maximum number of in-progress uploads. The default is 1000. Type Integer version-id-marker Description Specifies the object version to begin the list. Type String Response Entities KeyMarker Description The key marker specified by the key-marker request parameter, if any. Type String NextKeyMarker Description The key marker to use in a subsequent request if IsTruncated is true . Type String NextUploadIdMarker Description The upload ID marker to use in a subsequent request if IsTruncated is true . Type String IsTruncated Description If true , only a subset of the bucket's upload contents were returned. Type Boolean Size Description The size of the uploaded part. Type Integer DisplayName Description The owner's display name. Type String ID Description The owner's ID. Type String Owner Description A container for the ID and DisplayName of the user who owns the object. Type Container StorageClass Description The method used to store the resulting object. STANDARD or REDUCED_REDUNDANCY Type String Version Description Container for the version information. Type Container versionId Description Version ID of an object. Type String versionIdMarker Description The last version of the key in a truncated response. Type String 3.4.26. S3 head bucket Calls HEAD on a bucket to determine if it exists and if the caller has access permissions. Returns 200 OK if the bucket exists and the caller has permissions; 404 Not Found if the bucket does not exist; and, 403 Forbidden if the bucket exists but the caller does not have access permissions. Syntax 3.4.27. S3 list multipart uploads GET /?uploads returns a list of the current in-progress multipart uploads, that is, the application initiates a multipart upload, but the service hasn't completed all the uploads yet. Syntax You can specify parameters for GET / BUCKET ?uploads , but none of them are required. Parameters prefix Description Returns in-progress uploads whose keys contain the specified prefix. Type String delimiter Description The delimiter between the prefix and the rest of the object name. Type String key-marker Description The beginning marker for the list of uploads. Type String max-keys Description The maximum number of in-progress uploads. The default is 1000. Type Integer max-uploads Description The maximum number of multipart uploads. The range is from 1-1000. The default is 1000. Type Integer version-id-marker Description Ignored if key-marker isn't specified. Specifies the ID of the first upload to list in lexicographical order at or following the ID . Type String Response Entities ListMultipartUploadsResult Description A container for the results. 
Type Container ListMultipartUploadsResult.Prefix Description The prefix specified by the prefix request parameter, if any. Type String Bucket Description The bucket that will receive the bucket contents. Type String KeyMarker Description The key marker specified by the key-marker request parameter, if any. Type String UploadIdMarker Description The marker specified by the upload-id-marker request parameter, if any. Type String NextKeyMarker Description The key marker to use in a subsequent request if IsTruncated is true . Type String NextUploadIdMarker Description The upload ID marker to use in a subsequent request if IsTruncated is true . Type String MaxUploads Description The max uploads specified by the max-uploads request parameter. Type Integer Delimiter Description If set, objects with the same prefix will appear in the CommonPrefixes list. Type String IsTruncated Description If true , only a subset of the bucket's upload contents were returned. Type Boolean Upload Description A container for Key , UploadId , InitiatorOwner , StorageClass , and Initiated elements. Type Container Key Description The key of the object once the multipart upload is complete. Type String UploadId Description The ID that identifies the multipart upload. Type String Initiator Description Contains the ID and DisplayName of the user who initiated the upload. Type Container DisplayName Description The initiator's display name. Type String ID Description The initiator's ID. Type String Owner Description A container for the ID and DisplayName of the user who owns the uploaded object. Type Container StorageClass Description The method used to store the resulting object. STANDARD or REDUCED_REDUNDANCY Type String Initiated Description The date and time the user initiated the upload. Type Date CommonPrefixes Description If multiple objects contain the same prefix, they will appear in this list. Type Container CommonPrefixes.Prefix Description The substring of the key after the prefix as defined by the prefix request parameter. Type String 3.4.28. S3 bucket policies The Ceph Object Gateway supports a subset of the Amazon S3 policy language applied to buckets. Creation and Removal Ceph Object Gateway manages S3 Bucket policies through standard S3 operations rather than using the radosgw-admin CLI tool. Administrators may use the s3cmd command to set or delete a policy. Example Limitations Ceph Object Gateway only supports the following S3 actions: s3:AbortMultipartUpload s3:CreateBucket s3:DeleteBucketPolicy s3:DeleteBucket s3:DeleteBucketWebsite s3:DeleteObject s3:DeleteObjectVersion s3:GetBucketAcl s3:GetBucketCORS s3:GetBucketLocation s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketVersioning s3:GetBucketWebsite s3:GetLifecycleConfiguration s3:GetObjectAcl s3:GetObject s3:GetObjectTorrent s3:GetObjectVersionAcl s3:GetObjectVersion s3:GetObjectVersionTorrent s3:ListAllMyBuckets s3:ListBucketMultiPartUploads s3:ListBucket s3:ListBucketVersions s3:ListMultipartUploadParts s3:PutBucketAcl s3:PutBucketCORS s3:PutBucketPolicy s3:PutBucketRequestPayment s3:PutBucketVersioning s3:PutBucketWebsite s3:PutLifecycleConfiguration s3:PutObjectAcl s3:PutObject s3:PutObjectVersionAcl Note Ceph Object Gateway does not support setting policies on users, groups, or roles. The Ceph Object Gateway uses the RGW 'tenant' identifier in place of the Amazon twelve-digit account ID. 
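For illustration, the following minimal Boto sketch attaches a simple read-only policy to a bucket, similar to what the s3cmd example above achieves. The endpoint, credentials, tenant, user, and bucket names are placeholder values, the actions are taken from the supported list above, and the principal uses the RGW tenant name in place of the twelve-digit account ID, as described in the note above:

import json
import boto3

# Minimal sketch: endpoint, credentials, tenant, user, and bucket names are
# placeholder values for your own environment.
client = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Grant the user 'otheruser' in tenant 'othertenant' read access to 'mybucket'.
# The RGW tenant identifier appears where AWS would use an account ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam::othertenant:user/otheruser"]},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"],
        }
    ],
}

client.put_bucket_policy(Bucket="mybucket", Policy=json.dumps(policy))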
Ceph Object Gateway administrators who want to use policies between Amazon Web Service (AWS) S3 and Ceph Object Gateway S3 will have to use the Amazon account ID as the tenant ID when creating users. With AWS S3, all tenants share a single namespace. By contrast, Ceph Object Gateway gives every tenant its own namespace of buckets. At present, Ceph Object Gateway clients trying to access a bucket belonging to another tenant MUST address it as tenant:bucket in the S3 request. In the AWS, a bucket policy can grant access to another account, and that account owner can then grant access to individual users with user permissions. Since Ceph Object Gateway does not yet support user, role, and group permissions, account owners will need to grant access directly to individual users. Important Granting an entire account access to a bucket grants access to ALL users in that account. Bucket policies do NOT support string interpolation. Ceph Object Gateway supports the following condition keys: aws:CurrentTime aws:EpochTime aws:PrincipalType aws:Referer aws:SecureTransport aws:SourceIp aws:UserAgent aws:username Ceph Object Gateway ONLY supports the following condition keys for the ListBucket action: s3:prefix s3:delimiter s3:max-keys Impact on Swift Ceph Object Gateway provides no functionality to set bucket policies under the Swift API. However, bucket policies that have been set with the S3 API govern Swift as well as S3 operations. Ceph Object Gateway matches Swift credentials against Principals specified in a policy. 3.4.29. S3 get the request payment configuration on a bucket Uses the requestPayment subresource to return the request payment configuration of a bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the requestPayment subresource to the bucket request as shown below. Syntax 3.4.30. S3 set the request payment configuration on a bucket Uses the requestPayment subresource to set the request payment configuration of a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner to specify that the person requesting the download will be charged for the request and the data download from the bucket. Add the requestPayment subresource to the bucket request as shown below. Syntax Request Entities Payer Description Specifies who pays for the download and request fees. Type Enum RequestPaymentConfiguration Description A container for Payer . Type Container 3.4.31. Multi-tenant bucket operations When a client application accesses buckets, it always operates with the credentials of a particular user. In Red Hat Ceph Storage cluster, every user belongs to a tenant. Consequently, every bucket operation has an implicit tenant in its context if no tenant is specified explicitly. Thus multi-tenancy is completely backward compatible with releases, as long as the referred buckets and referring user belong to the same tenant. Extensions employed to specify an explicit tenant differ according to the protocol and authentication system used. In the following example, a colon character separates tenant and bucket. Thus a sample URL would be: By contrast, a simple Python example separates the tenant and bucket in the bucket method itself: Example Note It's not possible to use S3-style subdomains using multi-tenancy, since host names cannot contain colons or any other separators that are not already valid in bucket names. Using a period creates an ambiguous syntax. 
Therefore, the bucket-in-URL-path format has to be used with multi-tenancy. Additional Resources See the Multi Tenancy section under User Management in the Red Hat Ceph Storage Object Gateway Guide for additional details. 3.4.32. S3 Block Public Access You can use the S3 Block Public Access feature to set access points, buckets, and accounts to help you manage public access to Amazon S3 resources. Using this feature, bucket policies, access point policies, and object permissions can be overridden to allow public access. By default, new buckets, access points, and objects do not allow public access. The S3 API in the Ceph Object Gateway supports a subset of the AWS public access settings: BlockPublicPolicy : This defines the setting to allow users to manage access point and bucket policies. This setting does not allow the users to publicly share the bucket or the objects it contains. Existing access point and bucket policies are not affected by enabling this setting. Setting this option to TRUE causes the S3: To reject calls to PUT Bucket policy. To reject calls to PUT access point policy for all of the bucket's same-account access points. Important Apply this setting at the account level so that users cannot alter a specific bucket's block public access setting. Note The TRUE setting only works if the specified policy allows public access. RestrictPublicBuckets : This defines the setting to restrict access to a bucket or access point with public policy. The restriction applies to only AWS service principals and authorized users within the bucket owner's account and access point owner's account. This blocks cross-account access to the access point or bucket, except for the cases specified, while still allowing users within the account to manage the access points or buckets. Enabling this setting does not affect existing access point or bucket policies. It only defines that Amazon S3 blocks public and cross-account access derived from any public access point or bucket policy, including non-public delegation to specific accounts. Note Access control lists (ACLs) are not currently supported by Red Hat Ceph Storage. Bucket policies are assumed to be public unless defined otherwise. To block public access a bucket policy must give access only to fixed values for one or more of the following: Note A fixed value does not contain a wildcard ( * ) or an AWS Identity and Access Management Policy Variable. An AWS principal, user, role, or service principal A set of Classless Inter-Domain Routings (CIDRs), using aws:SourceIp aws:SourceArn aws:SourceVpc aws:SourceVpce aws:SourceOwner aws:SourceAccount s3:x-amz-server-side-encryption-aws-kms-key-id aws:userid , outside the pattern AROLEID:* s3:DataAccessPointArn Note When used in a bucket policy, this value can contain a wildcard for the access point name without rendering the policy public, as long as the account ID is fixed. s3:DataAccessPointPointAccount The following example policy is considered public. Example To make a policy non-public, include any of the condition keys with a fixed value. Example Additional Resources See the S3 GET `PublicAccessBlock` section in the Red Hat Ceph Storage Developer Guide for details on getting a PublicAccessBlock. See the S3 PUT `PublicAccessBlock` section in the Red Hat Ceph Storage Developer Guide for details on creating or modifying a PublicAccessBlock. See the S3 Delete `PublicAccessBlock` section in the Red Hat Ceph Storage Developer Guide for details on deleting a PublicAccessBlock. 
See the S3 bucket policies section in the Red Hat Ceph Storage Developer Guide for details on bucket policies. See the Blocking public access to your Amazon S3 storage section of Amazon Simple Storage Service (S3) documentation. 3.4.33. S3 GET PublicAccessBlock To get the S3 Block Public Access feature configured, use GET and specify a destination AWS account. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response is an HTTP 200 response and is returned in XML format. 3.4.34. S3 PUT PublicAccessBlock Use this to create or modify the PublicAccessBlock configuration for an S3 bucket. To use this operation, you must have the s3:PutBucketPublicAccessBlock permission. Important If the PublicAccessBlock configuration is different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response is an HTTP 200 response and is returned with an empty HTTP body. 3.4.35. S3 delete PublicAccessBlock Use this to delete the PublicAccessBlock configuration for an S3 bucket. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response is an HTTP 200 response and is returned with an empty HTTP body. 3.5. S3 object operations As a developer, you can perform object operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway. The following table list the Amazon S3 functional operations for objects, along with the function's support status. Table 3.3. Object operations Feature Status Get Object Supported Get Object Information Supported Put Object Lock Supported Get Object Lock Supported Put Object Legal Hold Supported Get Object Legal Hold Supported Put Object Retention Supported Get Object Retention Supported Put Object Tagging Supported Get Object Tagging Supported Delete Object Tagging Supported Put Object Supported Delete Object Supported Delete Multiple Objects Supported Get Object ACLs Supported Put Object ACLs Supported Copy Object Supported Post Object Supported Options Object Supported Initiate Multipart Upload Supported Add a Part to a Multipart Upload Supported List Parts of a Multipart Upload Supported Assemble Multipart Upload Supported Copy Multipart Upload Supported Abort Multipart Upload Supported Multi-Tenancy Supported 3.5.1. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.5.2. S3 get an object from a bucket Retrieves an object from a bucket: Syntax Add the versionId subresource to retrieve a particular version of the object: Syntax Request Headers partNumber Description Part number of the object being read. This enables a ranged GET request for the specified part. Using this request is useful for downloading just a part of an object. Valid Values A positive integer between 1 and 10,000. Required No range Description The range of the object to retrieve. Note Multiple ranges of data per GET request are not supported. Valid Values Range:bytes=beginbyte-endbyte Required No if-modified-since Description Gets only if modified since the timestamp. Valid Values Timestamp Required No if-unmodified-since Description Gets only if not modified since the timestamp. Valid Values Timestamp Required No if-match Description Gets only if object ETag matches ETag. 
Valid Values Entity Tag Required No if-none-match Description Gets only if the object ETag does not match the specified ETag. Valid Values Entity Tag Required No Syntax with request headers Response Headers Content-Range Description Data range, returned only if the range header field was specified in the request. x-amz-version-id Description Returns the version ID or null. 3.5.3. S3 get information on an object Returns information about an object. This request will return the same header information as with the Get Object request, but will include the metadata only, not the object data payload. Retrieves the current version of the object: Syntax Add the versionId subresource to retrieve info for a particular version: Syntax Request Headers range Description The range of the object to retrieve. Valid Values Range:bytes=beginbyte-endbyte Required No if-modified-since Description Gets only if modified since the timestamp. Valid Values Timestamp Required No if-match Description Gets only if object ETag matches ETag. Valid Values Entity Tag Required No if-none-match Description Gets only if the object ETag does not match the specified ETag. Valid Values Entity Tag Required No Response Headers x-amz-version-id Description Returns the version ID or null. 3.5.4. S3 put object lock The put object lock API places a lock configuration on the selected bucket. With object lock, you can store objects using a write-once-read-many (WORM) model. Object lock ensures that an object is not deleted or overwritten for a fixed amount of time or indefinitely. The rule specified in the object lock configuration is applied by default to every new object placed in the selected bucket. Important Enable object lock when creating a bucket; otherwise, the operation fails. Syntax Example Request Entities ObjectLockConfiguration Description A container for the request. Type Container Required Yes ObjectLockEnabled Description Indicates whether this bucket has an object lock configuration enabled. Type String Required Yes Rule Description The object lock rule in place for the specified bucket. Type Container Required No DefaultRetention Description The default retention period applied to new objects placed in the specified bucket. Type Container Required No Mode Description The default object lock retention mode. Valid values: GOVERNANCE/COMPLIANCE. Type Container Required Yes Days Description The number of days specified for the default retention period. Type Integer Required No Years Description The number of years specified for the default retention period. Type Integer Required No HTTP Response 400 Status Code MalformedXML Description The XML is not well-formed. 409 Status Code InvalidBucketState Description The bucket object lock is not enabled. Additional Resources For more information about this API call, see S3 API . 3.5.5. S3 get object lock The get object lock API retrieves the lock configuration for a bucket. Syntax Example Response Entities ObjectLockConfiguration Description A container for the request. Type Container Required Yes ObjectLockEnabled Description Indicates whether this bucket has an object lock configuration enabled. Type String Required Yes Rule Description The object lock rule in place for the specified bucket. Type Container Required No DefaultRetention Description The default retention period applied to new objects placed in the specified bucket. Type Container Required No Mode Description The default object lock retention mode. Valid values: GOVERNANCE/COMPLIANCE.
Type Container Required Yes Days Description The number of days specified for the default retention period. Type Integer Required No Years Description The number of years specified for the default retention period. Type Integer Required No Additional Resources For more information about this API call, see S3 API . 3.5.6. S3 put object legal hold The put object legal hold API applies a legal hold configuration to the selected object. With a legal hold in place, you cannot overwrite or delete an object version. A legal hold does not have an associated retention period and remains in place until you explicitly remove it. Syntax Example The versionId subresource retrieves a particular version of the object. Request Entities LegalHold Description A container for the request. Type Container Required Yes Status Description Indicates whether the specified object has a legal hold in place. Valid values: ON/OFF Type String Required Yes Additional Resources For more information about this API call, see S3 API . 3.5.7. S3 get object legal hold The get object legal hold API retrieves an object's current legal hold status. Syntax Example The versionId subresource retrieves a particular version of the object. Response Entities LegalHold Description A container for the request. Type Container Required Yes Status Description Indicates whether the specified object has a legal hold in place. Valid values: ON/OFF Type String Required Yes Additional Resources For more information about this API call, see S3 API . 3.5.8. S3 put object retention The put object retention API places an object retention configuration on an object. A retention period protects an object version for a fixed amount of time. There are two modes: governance mode and compliance mode. These two retention modes apply different levels of protection to your objects. Note During this period, your object is write-once-read-many (WORM protected) and can not be overwritten or deleted. Syntax Example The versionId subresource retrieves a particular version of the object. Request Entities Retention Description A container for the request. Type Container Required Yes Mode Description Retention mode for the specified object. Valid values: GOVERNANCE/COMPLIANCE Type String Required Yes RetainUntilDate Description Retention date. Format: 2020-01-05T00:00:00.000Z Type Timestamp Required Yes Additional Resources For more information about this API call, see S3 API . 3.5.9. S3 get object retention The get object retention API retrieves an object retention configuration on an object. Syntax Example The versionId subresource retrieves a particular version of the object. Response Entities Retention Description A container for the request. Type Container Required Yes Mode Description Retention mode for the specified object. Valid values: GOVERNANCE/COMPLIANCE Type String Required Yes RetainUntilDate Description Retention date. Format: 2020-01-05T00:00:00.000Z Type Timestamp Required Yes Additional Resources For more information about this API call, see S3 API . 3.5.10. S3 put object tagging The put object tagging API associates tags with an object. A tag is a key-value pair. To put tags of any other version, use the versionId query parameter. You must have permission to perform the s3:PutObjectTagging action. By default, the bucket owner has this permission and can grant this permission to others. Syntax Example Request Entities Tagging Description A container for the request. Type Container Required Yes TagSet Description A collection of a set of tags. 
Type String Required Yes Additional Resources For more information about this API call, see S3 API . 3.5.11. S3 get object tagging The get object tagging API returns the tag of an object. By default, the GET operation returns information on the current version of an object. Note For a versioned bucket, you can have multiple versions of an object in your bucket. To retrieve tags of any other version, add the versionId query parameter in the request. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.5.12. S3 delete object tagging The delete object tagging API removes the entire tag set from the specified object. You must have permission to perform the s3:DeleteObjectTagging action, to use this operation. Note To delete tags of a specific object version, add the versionId query parameter in the request. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.5.13. S3 add an object to a bucket Adds an object to a bucket. You must have write permissions on the bucket to perform this operation. Syntax Request Headers content-md5 Description A base64 encoded MD-5 hash of the message. Valid Values A string. No defaults or constraints. Required No content-type Description A standard MIME type. Valid Values Any MIME type. Default: binary/octet-stream . Required No x-amz-meta-<... >* Description User metadata. Stored with the object. Valid Values A string up to 8kb. No defaults. Required No x-amz-acl Description A canned ACL. Valid Values private , public-read , public-read-write , authenticated-read Required No Response Headers x-amz-version-id Description Returns the version ID or null. 3.5.14. S3 delete an object Removes an object. Requires WRITE permission set on the containing bucket. Deletes an object. If object versioning is on, it creates a marker. Syntax To delete an object when versioning is on, you must specify the versionId subresource and the version of the object to delete. 3.5.15. S3 delete multiple objects This API call deletes multiple objects from a bucket. Syntax 3.5.16. S3 get an object's Access Control List (ACL) Returns the ACL for the current version of the object: Syntax Add the versionId subresource to retrieve the ACL for a particular version: Syntax Response Headers x-amz-version-id Description Returns the version ID or null. Response Entities AccessControlPolicy Description A container for the response. Type Container AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String Grant Description A container for Grantee and Permission . Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.5.17. S3 set an object's Access Control List (ACL) Sets an object ACL for the current version of the object. Syntax Request Entities AccessControlPolicy Description A container for the response. Type Container AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. 
Type String Grant Description A container for Grantee and Permission . Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.5.18. S3 copy an object To copy an object, use PUT and specify a destination bucket and the object name. Syntax Request Headers x-amz-copy-source Description The source bucket name + object name. Valid Values BUCKET / OBJECT Required Yes x-amz-acl Description A canned ACL. Valid Values private , public-read , public-read-write , authenticated-read Required No x-amz-copy-if-modified-since Description Copies only if modified since the timestamp. Valid Values Timestamp Required No x-amz-copy-if-unmodified-since Description Copies only if unmodified since the timestamp. Valid Values Timestamp Required No x-amz-copy-if-match Description Copies only if object ETag matches ETag. Valid Values Entity Tag Required No x-amz-copy-if-none-match Description Copies only if object ETag matches ETag. Valid Values Entity Tag Required No Response Entities CopyObjectResult Description A container for the response elements. Type Container LastModified Description The last modified date of the source object. Type Date Etag Description The ETag of the new object. Type String 3.5.19. S3 add an object to a bucket using HTML forms Adds an object to a bucket using HTML forms. You must have write permissions on the bucket to perform this operation. Syntax 3.5.20. S3 determine options for a request A preflight request to determine if an actual request can be sent with the specific origin, HTTP method, and headers. Syntax 3.5.21. S3 initiate a multipart upload Initiates a multi-part upload process. Returns a UploadId , which you can specify when adding additional parts, listing parts, and completing or abandoning a multi-part upload. Syntax Request Headers content-md5 Description A base64 encoded MD-5 hash of the message. Valid Values A string. No defaults or constraints. Required No content-type Description A standard MIME type. Valid Values Any MIME type. Default: binary/octet-stream Required No x-amz-meta-<... > Description User metadata. Stored with the object. Valid Values A string up to 8kb. No defaults. Required No x-amz-acl Description A canned ACL. Valid Values private , public-read , public-read-write , authenticated-read Required No Response Entities InitiatedMultipartUploadsResult Description A container for the results. Type Container Bucket Description The bucket that will receive the object contents. Type String Key Description The key specified by the key request parameter, if any. Type String UploadId Description The ID specified by the upload-id request parameter identifying the multipart upload, if any. Type String 3.5.22. S3 add a part to a multipart upload Adds a part to a multi-part upload. Specify the uploadId subresource and the upload ID to add a part to a multi-part upload: Syntax The following HTTP response might be returned: HTTP Response 404 Status Code NoSuchUpload Description Specified upload-id does not match any initiated upload on this object. 3.5.23. S3 list the parts of a multipart upload Specify the uploadId subresource and the upload ID to list the parts of a multi-part upload: Syntax Response Entities InitiatedMultipartUploadsResult Description A container for the results. Type Container Bucket Description The bucket that will receive the object contents. 
Type String Key Description The key specified by the key request parameter, if any. Type String UploadId Description The ID specified by the upload-id request parameter identifying the multipart upload, if any. Type String Initiator Description Contains the ID and DisplayName of the user who initiated the upload. Type Container ID Description The initiator's ID. Type String DisplayName Description The initiator's display name. Type String Owner Description A container for the ID and DisplayName of the user who owns the uploaded object. Type Container StorageClass Description The method used to store the resulting object. STANDARD or REDUCED_REDUNDANCY Type String PartNumberMarker Description The part marker to use in a subsequent request if IsTruncated is true . Precedes the list. Type String NextPartNumberMarker Description The part marker to use in a subsequent request if IsTruncated is true . The end of the list. Type String IsTruncated Description If true , only a subset of the object's upload contents were returned. Type Boolean Part Description A container for Key , Part , InitiatorOwner , StorageClass , and Initiated elements. Type Container PartNumber Description A container for Key , Part , InitiatorOwner , StorageClass , and Initiated elements. Type Integer ETag Description The part's entity tag. Type String Size Description The size of the uploaded part. Type Integer 3.5.24. S3 assemble the uploaded parts Assembles uploaded parts and creates a new object, thereby completing a multipart upload. Specify the uploadId subresource and the upload ID to complete a multi-part upload: Syntax Request Entities CompleteMultipartUpload Description A container consisting of one or more parts. Type Container Required Yes Part Description A container for the PartNumber and ETag . Type Container Required Yes PartNumber Description The identifier of the part. Type Integer Required Yes ETag Description The part's entity tag. Type String Required Yes Response Entities CompleteMultipartUploadResult Description A container for the response. Type Container Location Description The resource identifier (path) of the new object. Type URI bucket Description The name of the bucket that contains the new object. Type String Key Description The object's key. Type String ETag Description The entity tag of the new object. Type String 3.5.25. S3 copy a multipart upload Uploads a part by copying data from an existing object as data source. Specify the uploadId subresource and the upload ID to perform a multi-part upload copy: Syntax Request Headers x-amz-copy-source Description The source bucket name and object name. Valid Values BUCKET / OBJECT Required Yes x-amz-copy-source-range Description The range of bytes to copy from the source object. Valid Values Range: bytes=first-last , where the first and last are the zero-based byte offsets to copy. For example, bytes=0-9 indicates that you want to copy the first ten bytes of the source. Required No Response Entities CopyPartResult Description A container for all response elements. Type Container ETag Description Returns the ETag of the new part. Type String LastModified Description Returns the date the part was last modified. Type String Additional Resources For more information about this feature, see the Amazon S3 site . 3.5.26. S3 abort a multipart upload Aborts a multipart upload. Specify the uploadId subresource and the upload ID to abort a multi-part upload: Syntax 3.5.27. 
S3 Hadoop interoperability For data analytics applications that require Hadoop Distributed File System (HDFS) access, the Ceph Object Gateway can be accessed using the Apache S3A connector for Hadoop. The S3A connector is an open-source tool that presents S3-compatible object storage as an HDFS file system, with HDFS read and write semantics, to applications while the data is stored in the Ceph Object Gateway. The Ceph Object Gateway is fully compatible with the S3A connector that ships with Hadoop 2.7.3. 3.5.28. Additional Resources See the Red Hat Ceph Storage Object Gateway Guide for details on multi-tenancy. 3.6. S3 select operations (Technology Preview) As a developer, you can use the S3 select API with high-level analytic applications such as Spark-SQL to improve latency and throughput. For example, given a CSV S3 object with several gigabytes of data, you can extract a single column that is filtered by another column using the following query: Example Currently, the S3 object data must be retrieved from the Ceph OSD through the Ceph Object Gateway before it is filtered and extracted. Performance improves as the object grows larger and the query becomes more specific. 3.6.1. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. An S3 user created with user access. 3.6.2. S3 select content from an object The select object content API filters the content of an object through structured query language (SQL). In the request, you must specify the data serialization format of the object, comma-separated values (CSV), to retrieve the specified content. The Amazon Web Services (AWS) command-line interface (CLI) select object content uses the CSV format to parse object data into records and returns only the records specified in the query. Note You must specify the data serialization format for the response. You must have s3:GetObject permission for this operation. Syntax Example Request entities Bucket Description The bucket to select object content from. Type String Required Yes Key Description The object key. Length Constraints Minimum length of 1. Type String Required Yes SelectObjectContentRequest Description Root level tag for the select object content request parameters. Type String Required Yes Expression Description The expression that is used to query the object. Type String Required Yes ExpressionType Description The type of the provided expression, for example SQL. Type String Valid Values SQL Required Yes InputSerialization Description Describes the format of the data in the object that is being queried. Type String Required Yes OutputSerialization Description The format of the returned data: comma-separated and newline-delimited. Type String Required Yes Response entities If the action is successful, the service sends back an HTTP 200 response. Data is returned in XML format by the service: Payload Description Root level tag for the payload parameters. Type String Required Yes Records Description The records event. Type Base64-encoded binary data object Required No Stats Description The stats event. 
Type Long Required No The Ceph Object Gateway supports the following response: Example Syntax Example Supported features Currently, only part of the AWS s3 select command is supported: Features Details Description Example Arithmetic operators ^ * % / + - ( ) select (int(_1)+int(_2))*int(_9) from stdin; Arithmetic operators % modulo select count(*) from stdin where cast(_1 as int)%2 == 0; Arithmetic operators ^ power-of select cast(2^10 as int) from stdin; Compare operators > < >= ⇐ == != select _1,_2 from stdin where (int(_1)+int(_3))>int(_5); logical operator AND or NOT select count(*) from stdin where not (int(1)>123 and int(_5)<200); logical operator is null Returns true/false for null indication in expression logical operator and NULL is not null Returns true/false for null indication in expression logical operator and NULL unknown state Review null-handle and observe the results of logical operations with NULL. The query returns 0 . select count(*) from stdin where null and (3>2); Arithmetic operator with NULL unknown state Review null-handle and observe the results of binary operations with NULL. The query returns 0 . select count(*) from stdin where (null+1) and (3>2); Compare with NULL unknown state Review null-handle and observe results of compare operations with NULL. The query returns 0 . select count(*) from stdin where (null*1.5) != 3; missing column unknown state select count(*) from stdin where _1 is null; projection column Similar to if or then or else select case when (1+1==(2+1)*3) then 'case_1' when 4*3)==(12 then 'case_2' else 'case_else' end, age*2 from stdin; logical operator coalesce returns first non-null argument select coalesce(nullif(5,5),nullif(1,1.0),age+12) from stdin; logical operator nullif returns null in case both arguments are equal, or else the first one, nullif(1,1)=NULL nullif(null,1)=NULL nullif(2,1)=2 select nullif(cast(_1 as int),cast(_2 as int)) from stdin; logical operator {expression} in ( .. {expression} ..) 
select count(*) from stdin where 'ben' in (trim(_5),substring(_1,char_length(_1)-3,3),last_name); logical operator {expression} between {expression} and {expression} select count(*) from stdin where substring(_3,char_length(_3),1) between "x" and trim(_1) and substring(_3,char_length(_3)-1,1) == ":"; logical operator {expression} like {match-pattern} select count( ) from stdin where first_name like '%de_'; select count( ) from stdin where _1 like "%a[r-s]; casting operator select cast(123 as int)%2 from stdin; casting operator select cast(123.456 as float)%2 from stdin; casting operator select cast('ABC0-9' as string),cast(substr('ab12cd',3,2) as int)*4 from stdin; casting operator select cast(substring('publish on 2007-01-01',12,10) as timestamp) from stdin; non AWS casting operator select int(_1),int( 1.2 + 3.4) from stdin; non AWS casting operator select float(1.2) from stdin; non AWS casting operator select timestamp('1999:10:10-12:23:44') from stdin; Aggregation Function sun select sum(int(_1)) from stdin; Aggregation Function avg select avg(cast(_1 a float) + cast(_2 as int)) from stdin; Aggregation Function min select avg(cast(_1 a float) + cast(_2 as int)) from stdin; Aggregation Function max select max(float(_1)),min(int(_5)) from stdin; Aggregation Function count select count(*) from stdin where (int(1)+int(_3))>int(_5); Timestamp Functions extract select count(*) from stdin where extract('year',timestamp(_2)) > 1950 and extract('year',timestamp(_1)) < 1960; Timestamp Functions dateadd select count(0) from stdin where datediff('year',timestamp(_1),dateadd('day',366,timestamp(_1))) == 1; Timestamp Functions datediff select count(0) from stdin where datediff('month',timestamp(_1),timestamp(_2))) == 2; Timestamp Functions utcnow select count(0) from stdin where datediff('hours',utcnow(),dateadd('day',1,utcnow())) == 24 String Functions substring select count(0) from stdin where int(substring(_1,1,4))>1950 and int(substring(_1,1,4))<1960; String Functions trim select trim(' foobar ') from stdin; String Functions trim select trim(trailing from ' foobar ') from stdin; String Functions trim select trim(leading from ' foobar ') from stdin; String Functions trim select trim(both '12' from '1112211foobar22211122') from stdin; String Functions lower or upper select trim(both '12' from '1112211foobar22211122') from stdin; String Functions char_length, character_length select count(*) from stdin where char_length(_3)==3; Complex queries select sum(cast(_1 as int)),max(cast(_3 as int)), substring('abcdefghijklm', (2-1)*3+sum(cast(_1 as int))/sum(cast(_1 as int))+1, (count() + count(0))/count(0)) from stdin; alias support select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 from stdin where a3>100 and a3<300; Additional Resources See Amazon's S3 Select Object Content API for more details. 3.6.3. S3 supported select functions S3 select supports the following functions: .Timestamp timestamp(string) Description Converts string to the basic type of timestamp. Supported Currently it converts: yyyy:mm:dd hh:mi:dd extract(date-part,timestamp) Description Returns integer according to date-part extract from input timestamp. Supported date-part: year,month,week,day. dateadd(date-part ,integer,timestamp) Description Returns timestamp, a calculation based on the results of input timestamp and date-part. Supported date-part : year,month,day. datediff(date-part,timestamp,timestamp) Description Return an integer, a calculated result of the difference between two timestamps according to date-part. 
Supported date-part : year,month,day,hours. utcnow() Description Return timestamp of current time. Aggregation count() Description Returns integers based on the number of rows that match a condition if there is one. sum(expression) Description Returns a summary of expression on each row that matches a condition if there is one. avg(expression) Description Returns an average expression on each row that matches a condition if there is one. max(expression) Description Returns the maximal result for all expressions that match a condition if there is one. min(expression) Description Returns the minimal result for all expressions that match a condition if there is one. String substring(string,from,to) Description Returns a string extract from input string based on from and to inputs. Char_length Description Returns a number of characters in string. Character_length also does the same. Trim Description Trims the leading or trailing characters from the target string, default is a blank character. Upper\lower Description Converts characters into uppercase or lowercase. NULL The NULL value is missing or unknown that is NULL can not produce a value on any arithmetic operations. The same applies to arithmetic comparison, any comparison to NULL is NULL that is unknown. Table 3.4. The NULL use case A is NULL Result(NULL=UNKNOWN) Not A NULL A or False NULL A or True True A or A NULL A and False False A and True NULL A and A NULL Additional Resources See Amazon's S3 Select Object Content API for more details. 3.6.4. S3 alias programming construct Alias programming construct is an essential part of the s3 select language because it enables better programming with objects that contain many columns or complex queries. When a statement with alias construct is parsed, it replaces the alias with a reference to the right projection column and on query execution, the reference is evaluated like any other expression. Alias maintains result-cache that is if an alias is used more than once, the same expression is not evaluated and the same result is returned because the result from the cache is used. Currently, Red Hat supports the column alias. Example 3.6.5. S3 CSV parsing explained You can define the CSV definitions with input serialization, with default values: Use {\n}` for row-delimiter. Use {"} for quote. Use {\} for escape characters. The csv-header-info is parsed, this is the first row in the input object containing the schema. Currently, output serialization and compression-type is not supported. The S3 select engine has a CSV parser which parses S3-objects: Each row ends with row-delimiter. The field-separator separates the adjacent columns. The successive field separator defines the NULL column. The quote-character overrides the field-separator that is the filed separator is any character between the quotes. The escape character disables any special character except the row delimiter. The following are examples of CSV parsing rules: Table 3.5. CSV parsing Feature Description Input (Tokens) NULL Successive field delimiter ,,1,,2, =⇒ {null}{null}{1}{null}{2}{null} QUOTE The quote character overrides field delimiter. 11,22,"a,b,c,d",last =⇒ {11}{22}{"a,b,c,d"}{last} Escape The escape character overrides the meta-character. A container for the object owner's ID and DisplayName row delimiter There is no closed quote, row delimiter is the closing line. 
11,22,a="str,44,55,66 =⇒ {11}{22}{a="str,44,55,66} csv header info FileHeaderInfo tag USE value means each token on first line is column-name, IGNORE value means to skip the first line. Additional Resources See Amazon's S3 Select Object Content API for more details. 3.7. Additional Resources See Appendix B, S3 common request headers for Amazon S3 common request headers. See Appendix C, S3 common response status codes for Amazon S3 common response status codes. See Appendix D, S3 unsupported header fields for unsupported header fields.
[ "HTTP/1.1 PUT /buckets/bucket/object.mpeg Host: cname.domain.com Date: Mon, 2 Jan 2012 00:01:01 +0000 Content-Encoding: mpeg Content-Length: 9999999 Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "firewall-cmd --zone=public --add-port=8080/tcp --permanent firewall-cmd --reload", "yum install dnsmasq echo \"address=/. FQDN_OF_GATEWAY_NODE / IP_OF_GATEWAY_NODE \" | tee --append /etc/dnsmasq.conf systemctl start dnsmasq systemctl enable dnsmasq", "systemctl stop NetworkManager systemctl disable NetworkManager", "echo \"DNS1= IP_OF_GATEWAY_NODE \" | tee --append /etc/sysconfig/network-scripts/ifcfg-eth0 echo \" IP_OF_GATEWAY_NODE FQDN_OF_GATEWAY_NODE \" | tee --append /etc/hosts systemctl restart network systemctl enable network systemctl restart dnsmasq", "[user@rgw ~]USD ping mybucket. FQDN_OF_GATEWAY_NODE", "yum install ruby", "gem install aws-s3", "[user@dev ~]USD mkdir ruby_aws_s3 [user@dev ~]USD cd ruby_aws_s3", "[user@dev ~]USD vim conn.rb", "#!/usr/bin/env ruby require 'aws/s3' require 'resolv-replace' AWS::S3::Base.establish_connection!( :server => ' FQDN_OF_GATEWAY_NODE ', :port => '8080', :access_key_id => ' MY_ACCESS_KEY ', :secret_access_key => ' MY_SECRET_KEY ' )", "#!/usr/bin/env ruby require 'aws/s3' require 'resolv-replace' AWS::S3::Base.establish_connection!( :server => 'testclient.englab.pnq.redhat.com', :port => '8080', :access_key_id => '98J4R9P22P5CDL65HKP8', :secret_access_key => '6C+jcaP0dp0+FZfrRNgyGA9EzRy25pURldwje049' )", "[user@dev ~]USD chmod +x conn.rb", "[user@dev ~]USD ./conn.rb | echo USD?", "[user@dev ~]USD vim create_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.create('my-new-bucket1')", "[user@dev ~]USD chmod +x create_bucket.rb", "[user@dev ~]USD ./create_bucket.rb", "[user@dev ~]USD vim list_owned_buckets.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Service.buckets.each do |bucket| puts \"{bucket.name}\\t{bucket.creation_date}\" end", "[user@dev ~]USD chmod +x list_owned_buckets.rb", "[user@dev ~]USD ./list_owned_buckets.rb", "my-new-bucket1 2020-01-21 10:33:19 UTC", "[user@dev ~]USD vim create_object.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::S3Object.store( 'hello.txt', 'Hello World!', 'my-new-bucket1', :content_type => 'text/plain' )", "[user@dev ~]USD chmod +x create_object.rb", "[user@dev ~]USD ./create_object.rb", "[user@dev ~]USD vim list_bucket_content.rb", "#!/usr/bin/env ruby load 'conn.rb' new_bucket = AWS::S3::Bucket.find('my-new-bucket1') new_bucket.each do |object| puts \"{object.key}\\t{object.about['content-length']}\\t{object.about['last-modified']}\" end", "[user@dev ~]USD chmod +x list_bucket_content.rb", "[user@dev ~]USD ./list_bucket_content.rb", "hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT", "[user@dev ~]USD vim del_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.delete('my-new-bucket1')", "[user@dev ~]USD chmod +x del_empty_bucket.rb", "[user@dev ~]USD ./del_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim del_non_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.delete('my-new-bucket1', :force => true)", "[user@dev ~]USD chmod +x del_non_empty_bucket.rb", "[user@dev ~]USD ./del_non_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim delete_object.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::S3Object.delete('hello.txt', 'my-new-bucket1')", "[user@dev ~]USD chmod +x delete_object.rb", "[user@dev ~]USD ./delete_object.rb", "yum install ruby", "gem install aws-sdk", "[user@dev ~]USD mkdir ruby_aws_sdk [user@dev ~]USD cd ruby_aws_sdk", "[user@dev 
~]USD vim conn.rb", "#!/usr/bin/env ruby require 'aws-sdk' require 'resolv-replace' Aws.config.update( endpoint: 'http:// FQDN_OF_GATEWAY_NODE :8080', access_key_id: ' MY_ACCESS_KEY ', secret_access_key: ' MY_SECRET_KEY ', force_path_style: true, region: 'us-east-1' )", "#!/usr/bin/env ruby require 'aws-sdk' require 'resolv-replace' Aws.config.update( endpoint: 'http://testclient.englab.pnq.redhat.com:8080', access_key_id: '98J4R9P22P5CDL65HKP8', secret_access_key: '6C+jcaP0dp0+FZfrRNgyGA9EzRy25pURldwje049', force_path_style: true, region: 'us-east-1' )", "[user@dev ~]USD chmod +x conn.rb", "[user@dev ~]USD ./conn.rb | echo USD?", "[user@dev ~]USD vim create_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.create_bucket(bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x create_bucket.rb", "[user@dev ~]USD ./create_bucket.rb", "[user@dev ~]USD vim list_owned_buckets.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.list_buckets.buckets.each do |bucket| puts \"{bucket.name}\\t{bucket.creation_date}\" end", "[user@dev ~]USD chmod +x list_owned_buckets.rb", "[user@dev ~]USD ./list_owned_buckets.rb", "my-new-bucket2 2020-01-21 10:33:19 UTC", "[user@dev ~]USD vim create_object.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.put_object( key: 'hello.txt', body: 'Hello World!', bucket: 'my-new-bucket2', content_type: 'text/plain' )", "[user@dev ~]USD chmod +x create_object.rb", "[user@dev ~]USD ./create_object.rb", "[user@dev ~]USD vim list_bucket_content.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.list_objects(bucket: 'my-new-bucket2').contents.each do |object| puts \"{object.key}\\t{object.size}\" end", "[user@dev ~]USD chmod +x list_bucket_content.rb", "[user@dev ~]USD ./list_bucket_content.rb", "hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT", "[user@dev ~]USD vim del_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.delete_bucket(bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x del_empty_bucket.rb", "[user@dev ~]USD ./del_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim del_non_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new Aws::S3::Bucket.new('my-new-bucket2', client: s3_client).clear! 
s3_client.delete_bucket(bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x del_non_empty_bucket.rb", "[user@dev ~]USD ./del_non_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim delete_object.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.delete_object(key: 'hello.txt', bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x delete_object.rb", "[user@dev ~]USD ./delete_object.rb", "yum install php", "[user@dev ~]USD mkdir php_s3 [user@dev ~]USD cd php_s3", "[user@dev ~]USD cp -r ~/Downloads/aws/ ~/php_s3/", "[user@dev ~]USD vim conn.php", "<?php define('AWS_KEY', ' MY_ACCESS_KEY '); define('AWS_SECRET_KEY', ' MY_SECRET_KEY '); define('HOST', ' FQDN_OF_GATEWAY_NODE '); define('PORT', '8080'); // require the AWS SDK for php library require '/ PATH_TO_AWS /aws-autoloader.php'; use Aws\\S3\\S3Client; // Establish connection with host using S3 Client client = S3Client::factory(array( 'base_url' => HOST , 'port' => PORT , 'key' => AWS_KEY , 'secret' => AWS_SECRET_KEY )); ?>", "[user@dev ~]USD php -f conn.php | echo USD?", "[user@dev ~]USD vim create_bucket.php", "<?php include 'conn.php'; client->createBucket(array('Bucket' => 'my-new-bucket3')); ?>", "[user@dev ~]USD php -f create_bucket.php", "[user@dev ~]USD vim list_owned_buckets.php", "<?php include 'conn.php'; blist = client->listBuckets(); echo \"Buckets belonging to \" . blist['Owner']['ID'] . \":\\n\"; foreach (blist['Buckets'] as b) { echo \"{b['Name']}\\t{b['CreationDate']}\\n\"; } ?>", "[user@dev ~]USD php -f list_owned_buckets.php", "my-new-bucket3 2020-01-21 10:33:19 UTC", "[user@dev ~]USD echo \"Hello World!\" > hello.txt", "[user@dev ~]USD vim create_object.php", "<?php include 'conn.php'; key = 'hello.txt'; source_file = './hello.txt'; acl = 'private'; bucket = 'my-new-bucket3'; client->upload(bucket, key, fopen(source_file, 'r'), acl); ?>", "[user@dev ~]USD php -f create_object.php", "[user@dev ~]USD vim list_bucket_content.php", "<?php include 'conn.php'; o_iter = client->getIterator('ListObjects', array( 'Bucket' => 'my-new-bucket3' )); foreach (o_iter as o) { echo \"{o['Key']}\\t{o['Size']}\\t{o['LastModified']}\\n\"; } ?>", "[user@dev ~]USD php -f list_bucket_content.php", "hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT", "[user@dev ~]USD vim del_empty_bucket.php", "<?php include 'conn.php'; client->deleteBucket(array('Bucket' => 'my-new-bucket3')); ?>", "[user@dev ~]USD php -f del_empty_bucket.php | echo USD?", "[user@dev ~]USD vim delete_object.php", "<?php include 'conn.php'; client->deleteObject(array( 'Bucket' => 'my-new-bucket3', 'Key' => 'hello.txt', )); ?>", "[user@dev ~]USD php -f delete_object.php", "ceph config set RGW_CLIENT_NAME rgw_sts_key STS_KEY ceph config set RGW_CLIENT_NAME rgw_s3_auth_use_sts true", "ceph config set client.rgw rgw_sts_key 7f8fd8dd4700mnop ceph config set client.rgw rgw_s3_auth_use_sts true", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin --uid USER_NAME --display-name \" DISPLAY_NAME \" --access_key USER_NAME --secret SECRET user create", "[user@rgw ~]USD radosgw-admin --uid TESTER --display-name \"TestUser\" --access_key TESTER --secret test123 user create", "radosgw-admin caps add --uid=\" USER_NAME \" --caps=\"oidc-provider=*\"", "[user@rgw ~]USD radosgw-admin caps add --uid=\"TESTER\" --caps=\"oidc-provider=*\"", "\"{\\\"Version\\\":\\\"2020-01-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"Federated\\\":[\\\"arn:aws:iam:::oidc-provider/ IDP_URL \\\"]},\\\"Action\\\":[\\\"sts:AssumeRoleWithWebIdentity\\\"],\\\"Condition\\\":{\\\"StringEquals\\\":{\\\" IDP_URL :app_id\\\":\\\" AUD_FIELD \\\"\\}\\}\\}\\]\\}\"", "curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \" IDP_URL :8000/ CONTEXT /realms/ REALM /.well-known/openid-configuration\" | jq .", "[user@client ~]USD curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \"http://www.example.com:8000/auth/realms/quickstart/.well-known/openid-configuration\" | jq .", "curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \" IDP_URL / CONTEXT /realms/ REALM /protocol/openid-connect/certs\" | jq .", "[user@client ~]USD curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \"http://www.example.com/auth/realms/quickstart/protocol/openid-connect/certs\" | jq .", "openssl x509 -in CERT_FILE -fingerprint -noout", "[user@client ~]USD openssl x509 -in certificate.crt -fingerprint -noout SHA1 Fingerprint=F7:D7:B3:51:5D:D0:D3:19:DD:21:9A:43:A9:EA:72:7A:D6:06:52:87", "ceph config set RGW_CLIENT_NAME rgw_sts_key STS_KEY ceph config set RGW_CLIENT_NAME rgw_s3_auth_use_sts true", "ceph config set client.rgw rgw_sts_key 7f8fd8dd4700mnop ceph config set client.rgw rgw_s3_auth_use_sts true", "[user@osp ~]USD openstack ec2 credentials create +------------+--------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------+ | access | b924dfc87d454d15896691182fdeb0ef | | links | {u'self': u'http://192.168.0.15/identity/v3/users/ | | | 40a7140e424f493d8165abc652dc731c/credentials/ | | | OS-EC2/b924dfc87d454d15896691182fdeb0ef'} | | project_id | c703801dccaf4a0aaa39bec8c481e25a | | secret | 6a2142613c504c42a94ba2b82147dc28 | | trust_id | None | | user_id | 40a7140e424f493d8165abc652dc731c | +------------+--------------------------------------------------------+", "import boto3 access_key = b924dfc87d454d15896691182fdeb0ef secret_key = 6a2142613c504c42a94ba2b82147dc28 client = boto3.client('sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, endpoint_url=https://www.example.com/rgw, region_name='', ) response = client.get_session_token( DurationSeconds=43200 )", "s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=https://www.example.com/s3, region_name='') bucket = s3client.create_bucket(Bucket='my-new-shiny-bucket') response = s3client.list_buckets() for bucket in response[\"Buckets\"]: print \"{name}\\t{created}\".format( name = bucket['Name'], created = bucket['CreationDate'], )", "radosgw-admin caps add --uid=\" USER \" 
--caps=\"roles=*\"", "radosgw-admin caps add --uid=\"gwadmin\" --caps=\"roles=*\"", "radosgw-admin role create --role-name= ROLE_NAME --path= PATH --assume-role-policy-doc= TRUST_POLICY_DOC", "radosgw-admin role create --role-name=S3Access --path=/application_abc/component_xyz/ --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\}", "radosgw-admin role-policy put --role-name= ROLE_NAME --policy-name= POLICY_NAME --policy-doc= PERMISSION_POLICY_DOC", "radosgw-admin role-policy put --role-name=S3Access --policy-name=Policy --policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\[\\\"s3:*\\\"\\],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"\\}\\]\\}", "radosgw-admin user info --uid=gwuser | grep -A1 access_key", "import boto3 access_key = 11BS02LGFB6AL6H1ADMW secret_key = vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY client = boto3.client('sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, endpoint_url=https://www.example.com/rgw, region_name='', ) response = client.assume_role( RoleArn='arn:aws:iam:::role/application_abc/component_xyz/S3Access', RoleSessionName='Bob', DurationSeconds=3600 )", "class SigV4Auth(BaseSigner): \"\"\" Sign a request with Signature V4. \"\"\" REQUIRES_REGION = True def __init__(self, credentials, service_name, region_name): self.credentials = credentials # We initialize these value here so the unit tests can have # valid values. But these will get overriden in ``add_auth`` # later for real requests. self._region_name = region_name if service_name == 'sts': 1 self._service_name = 's3' 2 else: 3 self._service_name = service_name 4", "def _modify_request_before_signing(self, request): if 'Authorization' in request.headers: del request.headers['Authorization'] self._set_necessary_date_headers(request) if self.credentials.token: if 'X-Amz-Security-Token' in request.headers: del request.headers['X-Amz-Security-Token'] request.headers['X-Amz-Security-Token'] = self.credentials.token if not request.context.get('payload_signing_enabled', True): if 'X-Amz-Content-SHA256' in request.headers: del request.headers['X-Amz-Content-SHA256'] request.headers['X-Amz-Content-SHA256'] = UNSIGNED_PAYLOAD 1 else: 2 request.headers['X-Amz-Content-SHA256'] = self.payload(request)", "client.put_bucket_notification_configuration( Bucket=bucket_name, NotificationConfiguration={ 'TopicConfigurations': [ { 'Id': notification_name, 'TopicArn': topic_arn, 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*'] }]})", "Get / BUCKET ?notification= NOTIFICATION_ID HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "Get /testbucket?notification=testnotificationID HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "<NotificationConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <TopicConfiguration> <Id></Id> <Topic></Topic> <Event></Event> <Filter> <S3Key> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Key> <S3Metadata> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Metadata> <S3Tags> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Tags> </Filter> </TopicConfiguration> </NotificationConfiguration>", "DELETE / BUCKET ?notification= NOTIFICATION_ID HTTP/1.1", "DELETE 
/testbucket?notification=testnotificationID HTTP/1.1", "GET /mybucket HTTP/1.1 Host: cname.domain.com", "GET / HTTP/1.1 Host: mybucket.cname.domain.com", "GET / HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?max-keys=25 HTTP/1.1 Host: cname.domain.com", "PUT / BUCKET HTTP/1.1 Host: cname.domain.com x-amz-acl: public-read-write Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?website-configuration=HTTP/1.1", "PUT /testbucket?website-configuration=HTTP/1.1", "GET / BUCKET ?website-configuration=HTTP/1.1", "GET /testbucket?website-configuration=HTTP/1.1", "DELETE / BUCKET ?website-configuration=HTTP/1.1", "DELETE /testbucket?website-configuration=HTTP/1.1", "DELETE / BUCKET HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "<LifecycleConfiguration> <Rule> <Prefix/> <Status>Enabled</Status> <Expiration> <Days>10</Days> </Expiration> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Prefix>keypre/</Prefix> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Prefix>keypre/</Prefix> </Filter> </Rule> <Rule> <Status>Enabled</Status> <Filter> <Prefix>mypre/</Prefix> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Tag> <Key>key</Key> <Value>value</Value> </Tag> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <And> <Prefix>key-prefix</Prefix> <Tag> <Key>key1</Key> <Value>value1</Value> </Tag> <Tag> <Key>key2</Key> <Value>value2</Value> </Tag> </And> </Filter> </Rule> </LifecycleConfiguration>", "GET / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET <LifecycleConfiguration> <Rule> <Expiration> <Days>10</Days> </Expiration> </Rule> <Rule> </Rule> </LifecycleConfiguration>", "DELETE / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?location HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?versioning HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?versioning HTTP/1.1", "PUT /testbucket?versioning HTTP/1.1", "GET / BUCKET ?acl HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?acl HTTP/1.1", "GET / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "DELETE / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?versions HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "HEAD / BUCKET HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?uploads HTTP/1.1", "cat > examplepol { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": {\"AWS\": [\"arn:aws:iam::usfolks:user/fred\"]}, \"Action\": \"s3:PutObjectAcl\", \"Resource\": [ \"arn:aws:s3:::happybucket/*\" ] 
}] } s3cmd setpolicy examplepol s3://happybucket s3cmd delpolicy s3://happybucket", "GET / BUCKET ?requestPayment HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?requestPayment HTTP/1.1 Host: cname.domain.com", "https://rgw.domain.com/tenant:bucket", "from boto.s3.connection import S3Connection, OrdinaryCallingFormat c = S3Connection( aws_access_key_id=\"TESTER\", aws_secret_access_key=\"test123\", host=\"rgw.domain.com\", calling_format = OrdinaryCallingFormat() ) bucket = c.get_bucket(\"tenant:bucket\")", "{ \"Principal\": \"*\", \"Resource\": \"*\", \"Action\": \"s3:PutObject\", \"Effect\": \"Allow\", \"Condition\": { \"StringLike\": {\"aws:SourceVpc\": \"vpc-*\"}} }", "{ \"Principal\": \"*\", \"Resource\": \"*\", \"Action\": \"s3:PutObject\", \"Effect\": \"Allow\", \"Condition\": {\"StringEquals\": {\"aws:SourceVpc\": \"vpc-91237329\"}} }", "GET /v20180820/configuration/publicAccessBlock HTTP/1.1 Host: cname.domain.com x-amz-account-id: _ACCOUNTID_", "PUT /?publicAccessBlock HTTP/1.1 Host: Bucket.s3.amazonaws.com Content-MD5: ContentMD5 x-amz-sdk-checksum-algorithm: ChecksumAlgorithm x-amz-expected-bucket-owner: ExpectedBucketOwner <?xml version=\"1.0\" encoding=\"UTF-8\"?> <PublicAccessBlockConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <BlockPublicAcls>boolean</BlockPublicAcls> <IgnorePublicAcls>boolean</IgnorePublicAcls> <BlockPublicPolicy>boolean</BlockPublicPolicy> <RestrictPublicBuckets>boolean</RestrictPublicBuckets> </PublicAccessBlockConfiguration>", "DELETE /v20180820/configuration/publicAccessBlock HTTP/1.1 Host: s3-control.amazonaws.com x-amz-account-id: AccountId", "GET / BUCKET / OBJECT HTTP/1.1", "GET / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "GET / BUCKET / OBJECT ?partNumber= PARTNUMBER &versionId= VersionId HTTP/1.1 Host: Bucket.s3.amazonaws.com If-Match: IfMatch If-Modified-Since: IfModifiedSince If-None-Match: IfNoneMatch If-Unmodified-Since: IfUnmodifiedSince Range: Range", "HEAD / BUCKET / OBJECT HTTP/1.1", "HEAD / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "PUT / BUCKET ?object-lock HTTP/1.1", "PUT /testbucket?object-lock HTTP/1.1", "GET / BUCKET ?object-lock HTTP/1.1", "GET /testbucket?object-lock HTTP/1.1", "PUT / BUCKET / OBJECT ?legal-hold&versionId= HTTP/1.1", "PUT /testbucket/testobject?legal-hold&versionId= HTTP/1.1", "GET / BUCKET / OBJECT ?legal-hold&versionId= HTTP/1.1", "GET /testbucket/testobject?legal-hold&versionId= HTTP/1.1", "PUT / BUCKET / OBJECT ?retention&versionId= HTTP/1.1", "PUT /testbucket/testobject?retention&versionId= HTTP/1.1", "GET / BUCKET / OBJECT ?retention&versionId= HTTP/1.1", "GET /testbucket/testobject?retention&versionId= HTTP/1.1", "PUT / BUCKET / OBJECT ?tagging&versionId= HTTP/1.1", "PUT /testbucket/testobject?tagging&versionId= HTTP/1.1", "GET / BUCKET / OBJECT ?tagging&versionId= HTTP/1.1", "GET /testbucket/testobject?tagging&versionId= HTTP/1.1", "DELETE / BUCKET / OBJECT ?tagging&versionId= HTTP/1.1", "DELETE /testbucket/testobject?tagging&versionId= HTTP/1.1", "PUT / BUCKET / OBJECT HTTP/1.1", "DELETE / BUCKET / OBJECT HTTP/1.1", "DELETE / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "POST / BUCKET / OBJECT ?delete HTTP/1.1", "GET / BUCKET / OBJECT ?acl HTTP/1.1", "GET / BUCKET / OBJECT ?versionId= VERSION_ID &acl HTTP/1.1", "PUT / BUCKET / OBJECT ?acl", "PUT / DEST_BUCKET / DEST_OBJECT HTTP/1.1 x-amz-copy-source: SOURCE_BUCKET / SOURCE_OBJECT", "POST / BUCKET / OBJECT HTTP/1.1", "OPTIONS / OBJECT HTTP/1.1", "POST / BUCKET 
/ OBJECT ?uploads", "PUT / BUCKET / OBJECT ?partNumber=&uploadId= UPLOAD_ID HTTP/1.1", "GET / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1", "POST / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1", "PUT / BUCKET / OBJECT ?partNumber=PartNumber&uploadId= UPLOAD_ID HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "DELETE / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1", "select customerid from s3Object where age>30 and age<65;", "POST / BUCKET / KEY ?select&select-type=2 HTTP/1.1\\r\\n", "POST /testbucket/sample1csv?select&select-type=2 HTTP/1.1\\r\\n", "{:event-type,records} {:content-type,application/octet-stream} :message-type,event}", "aws --endpoint-url http://localhost:80 s3api select-object-content --bucket BUCKET_NAME --expression-type 'SQL' --input-serialization '{\"CSV\": {\"FieldDelimiter\": \",\" , \"QuoteCharacter\": \"\\\"\" , \"RecordDelimiter\" : \"\\n\" , \"QuoteEscapeCharacter\" : \"\\\\\" , \"FileHeaderInfo\": \"USE\" }, \"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}' --key OBJECT_NAME --expression \"select count(0) from stdin where int(_1)<10;\" output.csv", "aws --endpoint-url http://localhost:80 s3api select-object-content --bucket testbucket --expression-type 'SQL' --input-serialization '{\"CSV\": {\"FieldDelimiter\": \",\" , \"QuoteCharacter\": \"\\\"\" , \"RecordDelimiter\" : \"\\n\" , \"QuoteEscapeCharacter\" : \"\\\\\" , \"FileHeaderInfo\": \"USE\" }, \"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}' --key testobject --expression \"select count(0) from stdin where int(_1)<10;\" output.csv", "select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 from s3object where a3>100 and a3<300;\")" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/developer_guide/ceph-object-gateway-and-the-s3-api
1.3. Configure Authentication in a Security Domain
1.3. Configure Authentication in a Security Domain To configure authentication settings for a security domain, log into the management console and follow this procedure. Procedure 1.1. Setup Authentication Settings for a Security Domain Open the security domain's detailed view. Click the Configuration label at the top of the management console. Select the profile to modify from the Profile selection box at the top left of the Profile view. Note You should only select a profile if you are running Red Hat JBoss Data Virtualizaton in domain mode. Expand the Security menu, and select Security Domains . Click the View link for the security domain you want to edit. Navigate to the Authentication subsystem configuration. Select the Authentication label at the top of the view if it is not already selected. The configuration area is divided into two areas: Login Modules and Details . The login module is the basic unit of configuration. A security domain can include several login modules, each of which can include several attributes and options. Add an authentication module. Click Add to add a JAAS authentication module. Fill in the details for your module. The Code is the class name of the module. The Flag setting controls how the module relates to other authentication modules within the same security domain. Explanation of the Flags The Java Enterprise Edition 6 specification provides the following explanation of the flags for security modules. The following list is taken from https://docs.oracle.com/javase/6/docs/api/javax/security/auth/login/Configuration.html . Refer to that document for more detailed information. Flag Details required The LoginModule is required to succeed. If it succeeds or fails, authentication still continues to proceed down the LoginModule list. requisite LoginModule is required to succeed. If it succeeds, authentication continues down the LoginModule list. If it fails, control immediately returns to the application (authentication does not proceed down the LoginModule list). sufficient The LoginModule is not required to succeed. If it does succeed, control immediately returns to the application (authentication does not proceed down the LoginModule list). If it fails, authentication continues down the LoginModule list. optional The LoginModule is not required to succeed. If it succeeds or fails, authentication still continues to proceed down the LoginModule list. Edit authentication settings After you have added your module, you can modify its Code or Flags by clicking Edit in the Details section of the screen. Be sure the Attributes tab is selected. Optional: Add or remove module options. If you need to add options to your module, click its entry in the Login Modules list, and select the Module Options tab in the Details section of the page. Click the Add button, and provide the key and value for the option. Use the Remove button to remove an option. Reload the Red Hat JBoss Data Virtualization server to make the authentication module changes available to applications. Result Your authentication module is added to the security domain, and is immediately available to applications which use the security domain. The jboss.security.security_domain Module Option By default, each login module defined in a security domain has the jboss.security.security_domain module option added to it automatically. This option causes problems with login modules which check to make sure that only known options are defined. 
The IBM Kerberos login module, com.ibm.security.auth.module.Krb5LoginModule, is one of these. You can disable the behavior of adding this module option by setting the jboss.security.disable.secdomain.option system property to true when starting JBoss EAP 6. Add the following to your start-up parameters. You can also set this property using the web-based Management Console: on a standalone server, set system properties in the System Properties section of the configuration; in a managed domain, set system properties for each server group.
[ "-Djboss.security.disable.secdomain.option=true" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/configure_authentication_in_a_security_domain
GitOps
GitOps OpenShift Container Platform 4.13 A declarative way to implement continuous deployment for cloud native applications. Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/gitops/index
Appendix D. Proficiency levels
Appendix D. Proficiency levels Each available example teaches concepts that require certain minimum knowledge. This requirement varies by example. The minimum requirements and concepts are organized in several levels of proficiency. In addition to the levels described here, you might need additional information specific to each example. Foundational The examples rated at Foundational proficiency generally require no prior knowledge of the subject matter; they provide general awareness and demonstration of key elements, concepts, and terminology. There are no special requirements except those directly mentioned in the description of the example. Advanced When using Advanced examples, the assumption is that you are familiar with the common concepts and terminology of the subject area of the example in addition to Kubernetes and OpenShift. You must also be able to perform basic tasks on your own, for example, configuring services and applications, or administering networks. If a service is needed by the example, but configuring it is not in the scope of the example, the assumption is that you have the knowledge to properly configure it, and only the resulting state of the service is described in the documentation. Expert Expert examples require the highest level of knowledge of the subject matter. You are expected to perform many tasks based on feature-based documentation and manuals, and the documentation is aimed at most complex scenarios.
null
https://docs.redhat.com/en/documentation/red_hat_support_for_spring_boot/2.7/html/dekorate_guide_for_spring_boot_developers/proficiency-levels
function::fastcall
function::fastcall Name function::fastcall - Mark function as declared fastcall Synopsis Arguments None Description Call this function before accessing arguments using the *_arg functions if the probed kernel function was declared fastcall in the source.
[ "fastcall()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-fastcall
3.4. Example: List Logical Networks Collection
3.4. Example: List Logical Networks Collection Red Hat Virtualization Manager creates a default ovirtmgmt network on installation. This network acts as the management network for Red Hat Virtualization Manager to access hosts. This network is associated with our Default cluster and is a member of the Default data center. This example uses the ovirtmgmt network to connect our virtual machines. The following request retrieves a representation of the logical networks collection: Example 3.4. List logical networks collection Request: cURL command: Result: The ovirtmgmt network is attached to the Default data center through a relationship using the data center's id code. The ovirtmgmt network is also attached to the Default cluster through a relationship in the cluster's network sub-collection.
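The same request can be issued from a script. The following is a minimal sketch using the Python requests library; the Manager host name, user name, password, and CA certificate path are placeholder values.

import requests

# Minimal sketch: list the networks collection over the REST API.
# Host, credentials, and CA certificate path are placeholders.
response = requests.get(
    'https://rhevm.example.com:443/ovirt-engine/api/networks',
    headers={'Accept': 'application/xml'},
    auth=('admin@internal', 'PASSWORD'),
    verify='/path/to/ca.crt',   # CA certificate of the Manager
)
response.raise_for_status()
print(response.text)            # XML representation of the networks collection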
[ "GET /ovirt-engine/api/networks HTTP/1.1 Accept: application/xml", "curl -X GET -H \"Accept: application/xml\" -u [USER:PASS] --cacert [CERT] https:// [RHEVM Host] :443/ovirt-engine/api/networks", "HTTP/1.1 200 OK Content-Type: application/xml <networks> <network id=\"00000000-0000-0000-0000-000000000009\" href=\"/ovirt-engine/api/networks/00000000-0000-0000-0000-000000000009\"> <name>ovirtmgmt</name> <description>Management Network</description> <data_center id=\"01a45ff0-915a-11e0-8b87-5254004ac988\" href=\"/ovirt-engine/api/datacenters/01a45ff0-915a-11e0-8b87-5254004ac988\"/> <stp>false</stp> <status> <state>operational</state> </status> <display>false</display> </network> </networks>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/example_list_logical_networks_collection
16.3. Removing a Guest Virtual Machine Entry
16.3. Removing a Guest Virtual Machine Entry If the guest virtual machine is running, unregister the system, by running the following command in a terminal window as root on the guest: If the system has been deleted, however, the virtual service cannot tell whether the service is deleted or paused. In that case, you must manually remove the system from the server side, using the following steps: Login to the Subscription Manager The Subscription Manager is located on the Red Hat Customer Portal . Login to the Customer Portal using your user name and password, by clicking the login icon at the top of the screen. Click the Subscriptions tab Click the Subscriptions tab. Click the Systems link Scroll down the page and click the Systems link. Delete the system To delete the system profile, locate the specified system's profile in the table, select the check box beside its name and click Delete .
[ "subscription-manager unregister" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/virt-who-remove-guests
Chapter 4. Configuring additional devices in an IBM Z or IBM LinuxONE environment
Chapter 4. Configuring additional devices in an IBM Z or IBM LinuxONE environment After installing OpenShift Container Platform, you can configure additional devices for your cluster in an IBM Z(R) or IBM(R) LinuxONE environment, which is installed with z/VM. The following devices can be configured: Fibre Channel Protocol (FCP) host FCP LUN DASD qeth You can configure devices by adding udev rules using the Machine Config Operator (MCO) or you can configure devices manually. Note The procedures described here apply only to z/VM installations. If you have installed your cluster with RHEL KVM on IBM Z(R) or IBM(R) LinuxONE infrastructure, no additional configuration is needed inside the KVM guest after the devices were added to the KVM guests. However, both in z/VM and RHEL KVM environments the steps to configure the Local Storage Operator and Kubernetes NMState Operator need to be applied. Additional resources Machine configuration overview 4.1. Configuring additional devices using the Machine Config Operator (MCO) Tasks in this section describe how to use features of the Machine Config Operator (MCO) to configure additional devices in an IBM Z(R) or IBM(R) LinuxONE environment. Configuring devices with the MCO is persistent but only allows specific configurations for compute nodes. MCO does not allow control plane nodes to have different configurations. Prerequisites You are logged in to the cluster as a user with administrative privileges. The device must be available to the z/VM guest. The device is already attached. The device is not included in the cio_ignore list, which can be set in the kernel parameters. You have created a MachineConfig object file with the following YAML: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker0 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker0]} nodeSelector: matchLabels: node-role.kubernetes.io/worker0: "" 4.1.1. Configuring a Fibre Channel Protocol (FCP) host The following is an example of how to configure an FCP host adapter with N_Port Identifier Virtualization (NPIV) by adding a udev rule. Procedure Take the following sample udev rule 441-zfcp-host-0.0.8000.rules : ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.8000", DRIVER=="zfcp", GOTO="cfg_zfcp_host_0.0.8000" ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="zfcp", TEST=="[ccw/0.0.8000]", GOTO="cfg_zfcp_host_0.0.8000" GOTO="end_zfcp_host_0.0.8000" LABEL="cfg_zfcp_host_0.0.8000" ATTR{[ccw/0.0.8000]online}="1" LABEL="end_zfcp_host_0.0.8000" Convert the rule to Base64 encoded by running the following command: USD base64 /path/to/file/ Copy the following MCO sample profile into a YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-host-0.0.8000.rules 3 1 The role you have defined in the machine config file. 2 The Base64 encoded string that you have generated in the step. 3 The path where the udev rule is located. 4.1.2. Configuring an FCP LUN The following is an example of how to configure an FCP LUN by adding a udev rule. You can add new FCP LUNs or add additional paths to LUNs that are already configured with multipathing. 
Procedure Take the following sample udev rule 41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules : ACTION=="add", SUBSYSTEMS=="ccw", KERNELS=="0.0.8000", GOTO="start_zfcp_lun_0.0.8207" GOTO="end_zfcp_lun_0.0.8000" LABEL="start_zfcp_lun_0.0.8000" SUBSYSTEM=="fc_remote_ports", ATTR{port_name}=="0x500507680d760026", GOTO="cfg_fc_0.0.8000_0x500507680d760026" GOTO="end_zfcp_lun_0.0.8000" LABEL="cfg_fc_0.0.8000_0x500507680d760026" ATTR{[ccw/0.0.8000]0x500507680d760026/unit_add}="0x00bc000000000000" GOTO="end_zfcp_lun_0.0.8000" LABEL="end_zfcp_lun_0.0.8000" Convert the rule to Base64 encoded by running the following command: USD base64 /path/to/file/ Copy the following MCO sample profile into a YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules 3 1 The role you have defined in the machine config file. 2 The Base64 encoded string that you have generated in the step. 3 The path where the udev rule is located. 4.1.3. Configuring DASD The following is an example of how to configure a DASD device by adding a udev rule. Procedure Take the following sample udev rule 41-dasd-eckd-0.0.4444.rules : ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.4444", DRIVER=="dasd-eckd", GOTO="cfg_dasd_eckd_0.0.4444" ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="dasd-eckd", TEST=="[ccw/0.0.4444]", GOTO="cfg_dasd_eckd_0.0.4444" GOTO="end_dasd_eckd_0.0.4444" LABEL="cfg_dasd_eckd_0.0.4444" ATTR{[ccw/0.0.4444]online}="1" LABEL="end_dasd_eckd_0.0.4444" Convert the rule to Base64 encoded by running the following command: USD base64 /path/to/file/ Copy the following MCO sample profile into a YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3 1 The role you have defined in the machine config file. 2 The Base64 encoded string that you have generated in the step. 3 The path where the udev rule is located. 4.1.4. Configuring qeth The following is an example of how to configure a qeth device by adding a udev rule. 
Procedure Take the following sample udev rule 41-qeth-0.0.1000.rules : ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="qeth", GOTO="group_qeth_0.0.1000" ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.1000", DRIVER=="qeth", GOTO="group_qeth_0.0.1000" ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.1001", DRIVER=="qeth", GOTO="group_qeth_0.0.1000" ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.1002", DRIVER=="qeth", GOTO="group_qeth_0.0.1000" ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.1000", DRIVER=="qeth", GOTO="cfg_qeth_0.0.1000" GOTO="end_qeth_0.0.1000" LABEL="group_qeth_0.0.1000" TEST=="[ccwgroup/0.0.1000]", GOTO="end_qeth_0.0.1000" TEST!="[ccw/0.0.1000]", GOTO="end_qeth_0.0.1000" TEST!="[ccw/0.0.1001]", GOTO="end_qeth_0.0.1000" TEST!="[ccw/0.0.1002]", GOTO="end_qeth_0.0.1000" ATTR{[drivers/ccwgroup:qeth]group}="0.0.1000,0.0.1001,0.0.1002" GOTO="end_qeth_0.0.1000" LABEL="cfg_qeth_0.0.1000" ATTR{[ccwgroup/0.0.1000]online}="1" LABEL="end_qeth_0.0.1000" Convert the rule to Base64 encoded by running the following command: USD base64 /path/to/file/ Copy the following MCO sample profile into a YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3 1 The role you have defined in the machine config file. 2 The Base64 encoded string that you have generated in the step. 3 The path where the udev rule is located. steps Install and configure the Local Storage Operator Observing and updating the node network state and configuration 4.2. Configuring additional devices manually Tasks in this section describe how to manually configure additional devices in an IBM Z(R) or IBM(R) LinuxONE environment. This configuration method is persistent over node restarts but not OpenShift Container Platform native and you need to redo the steps if you replace the node. Prerequisites You are logged in to the cluster as a user with administrative privileges. The device must be available to the node. In a z/VM environment, the device must be attached to the z/VM guest. Procedure Connect to the node via SSH by running the following command: USD ssh <user>@<node_ip_address> You can also start a debug session to the node by running the following command: USD oc debug node/<node_name> To enable the devices with the chzdev command, enter the following command: USD sudo chzdev -e <device> Additional resources chzdev - Configure IBM Z(R) devices (IBM(R) Documentation) Persistent device configuration (IBM(R) Documentation) 4.3. RoCE network Cards RoCE (RDMA over Converged Ethernet) network cards do not need to be enabled and their interfaces can be configured with the Kubernetes NMState Operator whenever they are available in the node. For example, RoCE network cards are available if they are attached in a z/VM environment or passed through in a RHEL KVM environment. 4.4. Enabling multipathing for FCP LUNs Tasks in this section describe how to manually configure additional devices in an IBM Z(R) or IBM(R) LinuxONE environment. This configuration method is persistent over node restarts but not OpenShift Container Platform native and you need to redo the steps if you replace the node. 
Important On IBM Z(R) and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE . Prerequisites You are logged in to the cluster as a user with administrative privileges. You have configured multiple paths to a LUN with either method explained above. Procedure Connect to the node via SSH by running the following command: USD ssh <user>@<node_ip_address> You can also start a debug session to the node by running the following command: USD oc debug node/<node_name> To enable multipathing, run the following command: USD sudo /sbin/mpathconf --enable To start the multipathd daemon, run the following command: USD sudo multipath Optional: To format your multipath device with fdisk, run the following command: USD sudo fdisk /dev/mapper/mpatha Verification To verify that the devices have been grouped, run the following command: USD sudo multipath -ll Example output mpatha (20017380030290197) dm-1 IBM,2810XIV size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw -+- policy='service-time 0' prio=50 status=enabled |- 1:0:0:6 sde 68:16 active ready running |- 1:0:1:6 sdf 69:24 active ready running |- 0:0:0:6 sdg 8:80 active ready running `- 0:0:1:6 sdh 66:48 active ready running steps Install and configure the Local Storage Operator Observing and updating the node network state and configuration
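Because the MachineConfig examples in section 4.1 all follow the same pattern, the Base64 encoding and manifest rendering can be scripted. The following is a minimal Python sketch that only reproduces the documented steps; the rule file name, role, and target path are placeholder values matching the FCP host example in section 4.1.1, and the script is not part of the product.

import base64

# Minimal sketch: Base64-encode a udev rule and render a MachineConfig manifest.
# RULE_FILE, ROLE, and TARGET_PATH are placeholders for your own values.
RULE_FILE = '441-zfcp-host-0.0.8000.rules'
ROLE = 'worker0'
TARGET_PATH = '/etc/udev/rules.d/41-zfcp-host-0.0.8000.rules'

with open(RULE_FILE, 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('ascii')

manifest = f"""apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: {ROLE}
  name: 99-{ROLE}-devices
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,{encoded}
        filesystem: root
        mode: 420
        path: {TARGET_PATH}
"""

with open(f'99-{ROLE}-devices.yaml', 'w') as out:
    out.write(manifest)

Apply the generated file to the cluster as you would any other MachineConfig object, for example with oc apply -f 99-worker0-devices.yaml.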
[ "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker0 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker0]} nodeSelector: matchLabels: node-role.kubernetes.io/worker0: \"\"", "ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.8000\", DRIVER==\"zfcp\", GOTO=\"cfg_zfcp_host_0.0.8000\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"zfcp\", TEST==\"[ccw/0.0.8000]\", GOTO=\"cfg_zfcp_host_0.0.8000\" GOTO=\"end_zfcp_host_0.0.8000\" LABEL=\"cfg_zfcp_host_0.0.8000\" ATTR{[ccw/0.0.8000]online}=\"1\" LABEL=\"end_zfcp_host_0.0.8000\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-host-0.0.8000.rules 3", "ACTION==\"add\", SUBSYSTEMS==\"ccw\", KERNELS==\"0.0.8000\", GOTO=\"start_zfcp_lun_0.0.8207\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"start_zfcp_lun_0.0.8000\" SUBSYSTEM==\"fc_remote_ports\", ATTR{port_name}==\"0x500507680d760026\", GOTO=\"cfg_fc_0.0.8000_0x500507680d760026\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"cfg_fc_0.0.8000_0x500507680d760026\" ATTR{[ccw/0.0.8000]0x500507680d760026/unit_add}=\"0x00bc000000000000\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"end_zfcp_lun_0.0.8000\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules 3", "ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.4444\", DRIVER==\"dasd-eckd\", GOTO=\"cfg_dasd_eckd_0.0.4444\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"dasd-eckd\", TEST==\"[ccw/0.0.4444]\", GOTO=\"cfg_dasd_eckd_0.0.4444\" GOTO=\"end_dasd_eckd_0.0.4444\" LABEL=\"cfg_dasd_eckd_0.0.4444\" ATTR{[ccw/0.0.4444]online}=\"1\" LABEL=\"end_dasd_eckd_0.0.4444\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3", "ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1001\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1002\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccwgroup\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"cfg_qeth_0.0.1000\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"group_qeth_0.0.1000\" TEST==\"[ccwgroup/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1001]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1002]\", GOTO=\"end_qeth_0.0.1000\" 
ATTR{[drivers/ccwgroup:qeth]group}=\"0.0.1000,0.0.1001,0.0.1002\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"cfg_qeth_0.0.1000\" ATTR{[ccwgroup/0.0.1000]online}=\"1\" LABEL=\"end_qeth_0.0.1000\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3", "ssh <user>@<node_ip_address>", "oc debug node/<node_name>", "sudo chzdev -e <device>", "ssh <user>@<node_ip_address>", "oc debug node/<node_name>", "sudo /sbin/mpathconf --enable", "sudo multipath", "sudo fdisk /dev/mapper/mpatha", "sudo multipath -ll", "mpatha (20017380030290197) dm-1 IBM,2810XIV size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw -+- policy='service-time 0' prio=50 status=enabled |- 1:0:0:6 sde 68:16 active ready running |- 1:0:1:6 sdf 69:24 active ready running |- 0:0:0:6 sdg 8:80 active ready running `- 0:0:1:6 sdh 66:48 active ready running" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_z_and_ibm_linuxone/post-install-configure-additional-devices-ibm-z
Chapter 2. Red Hat Ceph Storage considerations and recommendations
Chapter 2. Red Hat Ceph Storage considerations and recommendations As a storage administrator, you should have a basic understanding of what to consider before running a Red Hat Ceph Storage cluster, such as the hardware and network requirements, the types of workloads that work well with a Red Hat Ceph Storage cluster, and Red Hat's recommendations. Red Hat Ceph Storage can be used for different workloads based on a particular business need or set of requirements. Doing the necessary planning before installing Red Hat Ceph Storage is critical to the success of running a Ceph storage cluster efficiently and achieving the business requirements. Note Want help with planning a Red Hat Ceph Storage cluster for a specific use case? Contact your Red Hat representative for assistance. 2.1. Prerequisites Time to understand, consider, and plan a storage solution. 2.2. Basic Red Hat Ceph Storage considerations The first consideration for using Red Hat Ceph Storage is developing a storage strategy for the data. A storage strategy is a method of storing data that serves a particular use case. If you need to store volumes and images for a cloud platform like OpenStack, you can choose to store data on faster Serial Attached SCSI (SAS) drives with Solid State Drives (SSD) for journals. By contrast, if you need to store object data for an S3- or Swift-compliant gateway, you can choose to use something more economical, like traditional Serial Advanced Technology Attachment (SATA) drives. Red Hat Ceph Storage can accommodate both scenarios in the same storage cluster, but you need a means of providing the fast storage strategy to the cloud platform, and a means of providing more traditional storage for your object store. One of the most important steps in a successful Ceph deployment is identifying a price-to-performance profile suitable for the storage cluster's use case and workload. It is important to choose the right hardware for the use case. For example, choosing IOPS-optimized hardware for a cold storage application increases hardware costs unnecessarily. Conversely, choosing capacity-optimized hardware for its more attractive price point in an IOPS-intensive workload will likely lead to unhappy users complaining about slow performance. Red Hat Ceph Storage can support multiple storage strategies. Use cases, cost versus benefit performance tradeoffs, and data durability are the primary considerations that help develop a sound storage strategy. Use Cases Ceph provides massive storage capacity, and it supports numerous use cases, such as: The Ceph Block Device client is a leading storage backend for cloud platforms that provides limitless storage for volumes and images with high performance features like copy-on-write cloning. The Ceph Object Gateway client is a leading storage backend for cloud platforms that provides a RESTful S3-compliant and Swift-compliant object storage for objects like audio, bitmap, video, and other data. The Ceph File System for traditional file storage. Cost vs. Benefit of Performance Faster is better. Bigger is better. High durability is better. However, there is a price for each superlative quality, and a corresponding cost versus benefit tradeoff. Consider the following use cases from a performance perspective: SSDs can provide very fast storage for relatively small amounts of data and journaling. Storing a database or object index can benefit from a pool of very fast SSDs, but proves too expensive for other data.
SAS drives with SSD journaling provide fast performance at an economical price for volumes and images. SATA drives without SSD journaling provide cheap storage with lower overall performance. When you create a CRUSH hierarchy of OSDs, you need to consider the use case and an acceptable cost versus performance tradeoff. Data Durability In large scale storage clusters, hardware failure is an expectation, not an exception. However, data loss and service interruption remain unacceptable. For this reason, data durability is very important. Ceph addresses data durability with multiple replica copies of an object or with erasure coding and multiple coding chunks. Multiple copies or multiple coding chunks present an additional cost versus benefit tradeoff: it is cheaper to store fewer copies or coding chunks, but it can lead to the inability to service write requests in a degraded state. Generally, one object with two additional copies, or two coding chunks can allow a storage cluster to service writes in a degraded state while the storage cluster recovers. Replication stores one or more redundant copies of the data across failure domains in case of a hardware failure. However, redundant copies of data can become expensive at scale. For example, to store 1 petabyte of data with triple replication would require a cluster with at least 3 petabytes of storage capacity. Erasure coding stores data as data chunks and coding chunks. In the event of a lost data chunk, erasure coding can recover the lost data chunk with the remaining data chunks and coding chunks. Erasure coding is substantially more economical than replication. For example, using erasure coding with 8 data chunks and 3 coding chunks provides the same redundancy as 3 copies of the data. However, such an encoding scheme uses approximately 1.5x the initial data stored compared to 3x with replication. The CRUSH algorithm aids this process by ensuring that Ceph stores additional copies or coding chunks in different locations within the storage cluster. This ensures that the failure of a single storage device or host does not lead to a loss of all of the copies or coding chunks necessary to preclude data loss. You can plan a storage strategy with cost versus benefit tradeoffs, and data durability in mind, then present it to a Ceph client as a storage pool. Important ONLY the data storage pool can use erasure coding. Pools storing service data and bucket indexes use replication. Important Ceph's object copies or coding chunks make RAID solutions obsolete. Do not use RAID, because Ceph already handles data durability, a degraded RAID has a negative impact on performance, and recovering data using RAID is substantially slower than using deep copies or erasure coding chunks. Additional Resources See the Minimum hardware considerations for Red Hat Ceph Storage section of the Red Hat Ceph Storage Installation Guide for more details. 2.3. Red Hat Ceph Storage workload considerations One of the key benefits of a Ceph storage cluster is the ability to support different types of workloads within the same storage cluster using performance domains. Different hardware configurations can be associated with each performance domain. Storage administrators can deploy storage pools on the appropriate performance domain, providing applications with storage tailored to specific performance and cost profiles. Selecting appropriately sized and optimized servers for these performance domains is an essential aspect of designing a Red Hat Ceph Storage cluster. 
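As a quick back-of-the-envelope illustration of the durability trade-off discussed in the previous section, the following shell arithmetic (a sketch, not a Ceph command; the 100 TB figure is arbitrary) compares the raw capacity needed for 3x replication against an 8+3 erasure-coded pool.

usable=100   # TB of data to protect (arbitrary example value)

# 3x replication stores every object three times
echo "replication raw capacity (TB): $((usable * 3))"

# 8 data chunks + 3 coding chunks: overhead is (8+3)/8
awk -v u="$usable" 'BEGIN { printf "erasure coding raw capacity (TB): %.1f\n", u * (8 + 3) / 8 }'

The exact overhead for 8 data chunks plus 3 coding chunks works out to roughly 1.4x, in line with the approximately 1.5x figure quoted above, and in either case far below the 3x required by triple replication.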
To the Ceph client interface that reads and writes data, a Ceph storage cluster appears as a simple pool where the client stores data. However, the storage cluster performs many complex operations in a manner that is completely transparent to the client interface. Ceph clients and Ceph object storage daemons, referred to as Ceph OSDs, or simply OSDs, both use the Controlled Replication Under Scalable Hashing (CRUSH) algorithm for the storage and retrieval of objects. Ceph OSDs can run in containers within the storage cluster. A CRUSH map describes a topography of cluster resources, and the map exists both on client hosts as well as Ceph Monitor hosts within the cluster. Ceph clients and Ceph OSDs both use the CRUSH map and the CRUSH algorithm. Ceph clients communicate directly with OSDs, eliminating a centralized object lookup and a potential performance bottleneck. With awareness of the CRUSH map and communication with their peers, OSDs can handle replication, backfilling, and recovery-allowing for dynamic failure recovery. Ceph uses the CRUSH map to implement failure domains. Ceph also uses the CRUSH map to implement performance domains, which simply take the performance profile of the underlying hardware into consideration. The CRUSH map describes how Ceph stores data, and it is implemented as a simple hierarchy, specifically an acyclic graph, and a ruleset. The CRUSH map can support multiple hierarchies to separate one type of hardware performance profile from another. Ceph implements performance domains with device "classes". For example, you can have these performance domains coexisting in the same Red Hat Ceph Storage cluster: Hard disk drives (HDDs) are typically appropriate for cost and capacity-focused workloads. Throughput-sensitive workloads typically use HDDs with Ceph write journals on solid state drives (SSDs). IOPS-intensive workloads, such as MySQL and MariaDB, often use SSDs. Figure 2.1. Performance and Failure Domains Workloads Red Hat Ceph Storage is optimized for three primary workloads. Important Carefully consider the workload being run by Red Hat Ceph Storage clusters BEFORE considering what hardware to purchase, because it can significantly impact the price and performance of the storage cluster. For example, if the workload is capacity-optimized and the hardware is better suited to a throughput-optimized workload, then hardware will be more expensive than necessary. Conversely, if the workload is throughput-optimized and the hardware is better suited to a capacity-optimized workload, then the storage cluster can suffer from poor performance. IOPS optimized: Input, output per second (IOPS) optimization deployments are suitable for cloud computing operations, such as running MYSQL or MariaDB instances as virtual machines on OpenStack. IOPS optimized deployments require higher performance storage such as 15k RPM SAS drives and separate SSD journals to handle frequent write operations. Some high IOPS scenarios use all flash storage to improve IOPS and total throughput. An IOPS-optimized storage cluster has the following properties: Lowest cost per IOPS. Highest IOPS per GB. 99th percentile latency consistency. Uses for an IOPS-optimized storage cluster are: Typically block storage. 3x replication for hard disk drives (HDDs) or 2x replication for solid state drives (SSDs). MySQL on OpenStack clouds. Throughput optimized: Throughput-optimized deployments are suitable for serving up significant amounts of data, such as graphic, audio, and video content. 
Throughput-optimized deployments require high bandwidth networking hardware, controllers, and hard disk drives with fast sequential read and write characteristics. If fast data access is a requirement, then use a throughput-optimized storage strategy. Also, if fast write performance is a requirement, using Solid State Disks (SSD) for journals will substantially improve write performance. A throughput-optimized storage cluster has the following properties: Lowest cost per MBps (throughput). Highest MBps per TB. Highest MBps per BTU. Highest MBps per Watt. 97th percentile latency consistency. Uses for a throughput-optimized storage cluster are: Block or object storage. 3x replication. Active performance storage for video, audio, and images. Streaming media, such as 4k video. Capacity optimized: Capacity-optimized deployments are suitable for storing significant amounts of data as inexpensively as possible. Capacity-optimized deployments typically trade performance for a more attractive price point. For example, capacity-optimized deployments often use slower and less expensive SATA drives and co-locate journals rather than using SSDs for journaling. A cost and capacity-optimized storage cluster has the following properties: Lowest cost per TB. Lowest BTU per TB. Lowest Watts required per TB. Uses for a cost and capacity-optimized storage cluster are: Typically object storage. Erasure coding for maximizing usable capacity Object archive. Video, audio, and image object repositories. 2.4. Network considerations for Red Hat Ceph Storage An important aspect of a cloud storage solution is that storage clusters can run out of IOPS due to network latency, and other factors. Also, the storage cluster can run out of throughput due to bandwidth constraints long before the storage clusters run out of storage capacity. This means that the network hardware configuration must support the chosen workloads to meet price versus performance requirements. Storage administrators prefer that a storage cluster recovers as quickly as possible. Carefully consider bandwidth requirements for the storage cluster network, be mindful of network link oversubscription, and segregate the intra-cluster traffic from the client-to-cluster traffic. Also consider that network performance is increasingly important when considering the use of Solid State Disks (SSD), flash, NVMe, and other high performing storage devices. Ceph supports a public network and a storage cluster network. The public network handles client traffic and communication with Ceph Monitors. The storage cluster network handles Ceph OSD heartbeats, replication, backfilling, and recovery traffic. At a minimum , a single 10 Gb/s Ethernet link should be used for storage hardware, and you can add additional 10 Gb/s Ethernet links for connectivity and throughput. Important Red Hat recommends allocating bandwidth to the storage cluster network, such that it is a multiple of the public network using the osd_pool_default_size as the basis for the multiple on replicated pools. Red Hat also recommends running the public and storage cluster networks on separate network cards. Important Red Hat recommends using 10 Gb/s Ethernet for Red Hat Ceph Storage deployments in production. A 1 Gb/s Ethernet network is not suitable for production storage clusters. In the case of a drive failure, replicating 1 TB of data across a 1 Gb/s network takes 3 hours and replicating 10 TB across a 1 Gb/s network takes 30 hours. Using 10 TB is the typical drive configuration. 
By contrast, with a 10 Gb/s Ethernet network, the replication times would be 20 minutes for 1 TB and 1 hour for 10 TB. Remember that when a Ceph OSD fails, the storage cluster recovers by replicating the data it contained to other OSDs within the same failure domain and device class as the failed OSD. The failure of a larger domain such as a rack means that the storage cluster utilizes considerably more bandwidth. When building a storage cluster consisting of multiple racks, which is common for large storage implementations, consider utilizing as much network bandwidth between switches in a "fat tree" design for optimal performance. A typical 10 Gb/s Ethernet switch has 48 10 Gb/s ports and four 40 Gb/s ports. Use the 40 Gb/s ports on the spine for maximum throughput. Alternatively, consider aggregating unused 10 Gb/s ports with QSFP+ and SFP+ cables into more 40 Gb/s ports to connect to other rack and spine routers. Also, consider using LACP mode 4 to bond network interfaces. Additionally, use jumbo frames, with a maximum transmission unit (MTU) of 9000, especially on the backend or cluster network. Before installing and testing a Red Hat Ceph Storage cluster, verify the network throughput. Most performance-related problems in Ceph usually begin with a networking issue. Simple network issues like a kinked or bent Cat-6 cable could result in degraded bandwidth. Use a minimum of 10 Gb/s ethernet for the front side network. For large clusters, consider using 40 Gb/s ethernet for the backend or cluster network. Important For network optimization, Red Hat recommends using jumbo frames for a better CPU per bandwidth ratio, and a non-blocking network switch back-plane. Red Hat Ceph Storage requires the same MTU value throughout all networking devices in the communication path, end-to-end for both public and cluster networks. Verify that the MTU value is the same on all hosts and networking equipment in the environment before using a Red Hat Ceph Storage cluster in production. Additional Resources See the Configuring a private network section in the Red Hat Ceph Storage Configuration Guide for more details. See the Configuring a public network section in the Red Hat Ceph Storage Configuration Guide for more details. See the Configuring multiple public networks to the cluster section in the Red Hat Ceph Storage Configuration Guide for more details. 2.5. Considerations for using a RAID controller with OSD hosts Optionally, you can consider using a RAID controller on the OSD hosts. Here are some things to consider: If an OSD host has a RAID controller with 1-2 Gb of cache installed, enabling the write-back cache might result in increased small I/O write throughput. However, the cache must be non-volatile. Most modern RAID controllers have super capacitors that provide enough power to drain volatile memory to non-volatile NAND memory during a power-loss event. It is important to understand how a particular controller and its firmware behave after power is restored. Some RAID controllers require manual intervention. Hard drives typically advertise to the operating system whether their disk caches should be enabled or disabled by default. However, certain RAID controllers and some firmware do not provide such information. Verify that disk level caches are disabled to avoid file system corruption. Create a single RAID 0 volume with write-back for each Ceph OSD data drive with write-back cache enabled. 
If Serial Attached SCSI (SAS) or SATA connected Solid-state Drive (SSD) disks are also present on the RAID controller, then investigate whether the controller and firmware support pass-through mode. Enabling pass-through mode helps avoid caching logic, and generally results in much lower latency for fast media. 2.6. Tuning considerations for the Linux kernel when running Ceph Production Red Hat Ceph Storage clusters generally benefit from tuning the operating system, specifically around limits and memory allocation. Ensure that adjustments are set for all hosts within the storage cluster. You can also open a case with Red Hat support asking for additional guidance. Increase the File Descriptors The Ceph Object Gateway can hang if it runs out of file descriptors. You can modify the /etc/security/limits.conf file on Ceph Object Gateway hosts to increase the file descriptors for the Ceph Object Gateway. Adjusting the ulimit value for Large Storage Clusters When running Ceph administrative commands on large storage clusters, for example, with 1024 Ceph OSDs or more, create an /etc/security/limits.d/50-ceph.conf file on each host that runs administrative commands with the following contents: Replace USER_NAME with the name of the non-root user account that runs the Ceph administrative commands. Note The root user's ulimit value is already set to unlimited by default on Red Hat Enterprise Linux. 2.7. How colocation works and its advantages You can colocate containerized Ceph daemons on the same host. Here are the advantages of colocating some of Ceph's services: Significant improvement in total cost of ownership (TCO) at small scale Reduction from six hosts to three for the minimum configuration Easier upgrade Better resource isolation How Colocation Works With the help of the Cephadm orchestrator, you can colocate one daemon from the following list with one or more OSD daemons (ceph-osd): Ceph Monitor ( ceph-mon ) and Ceph Manager ( ceph-mgr ) daemons NFS Ganesha ( nfs-ganesha ) for Ceph Object Gateway (nfs-ganesha) RBD Mirror ( rbd-mirror ) Observability Stack (Grafana) Additionally, for Ceph Object Gateway ( radosgw ) (RGW) and Ceph File System ( ceph-mds ), you can colocate either with an OSD daemon plus a daemon from the above list, excluding RBD mirror. Note Collocating two of the same kind of daemons on a given node is not supported. Note Because ceph-mon and ceph-mgr work together closely they do not count as two separate daemons for the purposes of colocation. Note Red Hat recommends colocating the Ceph Object Gateway with Ceph OSD containers to increase performance. With the colocation rules shared above, we have the following minimum clusters sizes that comply with these rules: Example 1 Media: Full flash systems (SSDs) Use case: Block (RBD) and File (CephFS), or Object (Ceph Object Gateway) Number of nodes: 3 Replication scheme: 2 Host Daemon Daemon Daemon host1 OSD Monitor/Manager Grafana host2 OSD Monitor/Manager RGW or CephFS host3 OSD Monitor/Manager RGW or CephFS Note The minimum size for a storage cluster with three replicas is four nodes. Similarly, the size of a storage cluster with two replicas is a three node cluster. It is a requirement to have a certain number of nodes for the replication factor with an extra node in the cluster to avoid extended periods with the cluster in a degraded state. Figure 2.2. 
Colocated Daemons Example 1 Example 2 Media: Full flash systems (SSDs) or spinning devices (HDDs) Use case: Block (RBD), File (CephFS), and Object (Ceph Object Gateway) Number of nodes: 4 Replication scheme: 3 Host Daemon Daemon Daemon host1 OSD Grafana CephFS host2 OSD Monitor/Manager RGW host3 OSD Monitor/Manager RGW host4 OSD Monitor/Manager CephFS Figure 2.3. Colocated Daemons Example 2 Example 3 Media: Full flash systems (SSDs) or spinning devices (HDDs) Use case: Block (RBD), Object (Ceph Object Gateway), and NFS for Ceph Object Gateway Number of nodes: 4 Replication scheme: 3 Host Daemon Daemon Daemon host1 OSD Grafana host2 OSD Monitor/Manager RGW host3 OSD Monitor/Manager RGW host4 OSD Monitor/Manager NFS (RGW) Figure 2.4. Colocated Daemons Example 3 The diagrams below shows the differences between storage clusters with colocated and non-colocated daemons. Figure 2.5. Colocated Daemons Figure 2.6. Non-colocated Daemons 2.8. Operating system requirements for Red Hat Ceph Storage Red Hat Enterprise Linux entitlements are included in the Red Hat Ceph Storage subscription. The release of Red Hat Ceph Storage 5 is supported on Red Hat Enterprise Linux 8.4 EUS or later. Red Hat Ceph Storage 5 is supported on container-based deployments only. Use the same operating system version, architecture, and deployment type across all nodes. For example, do not use a mixture of nodes with both AMD64 and Intel 64 architectures, a mixture of nodes with Red Hat Enterprise Linux 8 operating systems, or a mixture of nodes with container-based deployments. Important Red Hat does not support clusters with heterogeneous architectures, operating system versions, or deployment types. SELinux By default, SELinux is set to Enforcing mode and the ceph-selinux packages are installed. For additional information on SELinux, see the Data Security and Hardening Guide , and Red Hat Enterprise Linux 8 Using SELinux Guide . Additional Resources The documentation set for Red Hat Enterprise Linux 8. 2.9. Minimum hardware considerations for Red Hat Ceph Storage Red Hat Ceph Storage can run on non-proprietary commodity hardware. Small production clusters and development clusters can run without performance optimization with modest hardware. Note Disk space requirements are based on the Ceph daemons' default path under /var/lib/ceph/ directory. Table 2.1. Containers Process Criteria Minimum Recommended ceph-osd-container Processor 1x AMD64 or Intel 64 CPU CORE per OSD container. RAM Minimum of 5 GB of RAM per OSD container. Number of nodes Minimum of 3 nodes required. OS Disk 1x OS disk per host. OSD Storage 1x storage drive per OSD container. Cannot be shared with OS Disk. block.db Optional, but Red Hat recommended, 1x SSD or NVMe or Optane partition or lvm per daemon. Sizing is 4% of block.data for BlueStore for object, file and mixed workloads and 1% of block.data for the BlueStore for Block Device, Openstack cinder, and Openstack cinder workloads. block.wal Optionally, 1x SSD or NVMe or Optane partition or logical volume per daemon. Use a small size, for example 10 GB, and only if it's faster than the block.db device. 
Network 2x 10 GB Ethernet NICs ceph-mon-container Processor 1x AMD64 or Intel 64 CPU CORE per mon-container RAM 3 GB per mon-container Disk Space 10 GB per mon-container , 50 GB Recommended Monitor Disk Optionally, 1x SSD disk for Monitor rocksdb data Network 2x 1 GB Ethernet NICs, 10 GB Recommended Prometheus 20 GB to 50 GB under /var/lib/ceph/ directory created as a separate file system to protect the contents under /var/ directory. ceph-mgr-container Processor 1x AMD64 or Intel 64 CPU CORE per mgr-container RAM 3 GB per mgr-container Network 2x 1 GB Ethernet NICs, 10 GB Recommended ceph-radosgw-container Processor 1x AMD64 or Intel 64 CPU CORE per radosgw-container RAM 1 GB per daemon Disk Space 5 GB per daemon Network 1x 1 GB Ethernet NICs ceph-mds-container Processor 1x AMD64 or Intel 64 CPU CORE per mds-container RAM 3 GB per mds-container This number is highly dependent on the configurable MDS cache size. The RAM requirement is typically twice as much as the amount set in the mds_cache_memory_limit configuration setting. Note also that this is the memory for your daemon, not the overall system memory. Disk Space 2 GB per mds-container , plus taking into consideration any additional space required for possible debug logging, 20GB is a good start. 2.10. Additional Resources If you want to take a deeper look into Ceph's various internal components, and the strategies around those components, see the Red Hat Ceph Storage Storage Strategies Guide for more details.
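Returning to the kernel-tuning and network guidance earlier in this chapter, here is a minimal sketch of those adjustments. It assumes a non-root administrative account named admin; substitute your own user name, as the USER_NAME placeholder in the accompanying configuration snippet indicates.

# Raise the Ceph Object Gateway file-descriptor limit (value from this chapter's snippets)
echo "ceph soft nofile unlimited" >> /etc/security/limits.conf

# Allow a non-root admin account to run Ceph administrative commands on large clusters
echo "admin soft nproc unlimited" > /etc/security/limits.d/50-ceph.conf

# Confirm the MTU is identical on every host before production use
ip link show | grep -o 'mtu [0-9]*'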
[ "ceph soft nofile unlimited", "USER_NAME soft nproc unlimited" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/installation_guide/red-hat-ceph-storage-considerations-and-recommendations
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_compute_service_for_instance_creation/proc_providing-feedback-on-red-hat-documentation
function::ipmib_remote_addr
function::ipmib_remote_addr Name function::ipmib_remote_addr - Get the remote IP address Synopsis Arguments skb pointer to a struct sk_buff SourceIsLocal a flag to indicate whether this is a local operation Description Returns the remote IP address from skb .
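A small, hypothetical usage sketch follows; it assumes kernel debuginfo is installed and probes ip_rcv, which receives an skb as its first argument. Neither the probe point nor the output format comes from this reference entry.

# Print the numeric remote address of each received packet
# (the second argument is 0 because the packets are received, not locally generated)
stap -e 'probe kernel.function("ip_rcv") {
    printf("remote addr: 0x%x\n", ipmib_remote_addr($skb, 0))
}'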
[ "ipmib_remote_addr:long(skb:long,SourceIsLocal:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ipmib-remote-addr
Chapter 159. Ignite ID Generator Component
Chapter 159. Ignite ID Generator Component Available as of Camel version 2.17 The Ignite ID Generator endpoint is one of camel-ignite endpoints which allows you to interact with Ignite Atomic Sequences and ID Generators . This endpoint only supports producers. 159.1. Options The Ignite ID Generator component supports 4 options, which are listed below. Name Description Default Type ignite (producer) Sets the Ignite instance. Ignite configurationResource (producer) Sets the resource from where to load the configuration. It can be a: URI, String (URI) or an InputStream. Object igniteConfiguration (producer) Allows the user to set a programmatic IgniteConfiguration. IgniteConfiguration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Ignite ID Generator endpoint is configured using URI syntax: with the following path and query parameters: 159.1.1. Path Parameters (1 parameters): Name Description Default Type name Required The sequence name. String 159.1.2. Query Parameters (6 parameters): Name Description Default Type batchSize (producer) The batch size. Integer initialValue (producer) The initial value. 0 Long operation (producer) The operation to invoke on the Ignite ID Generator. Superseded by the IgniteConstants.IGNITE_IDGEN_OPERATION header in the IN message. Possible values: ADD_AND_GET, GET, GET_AND_ADD, GET_AND_INCREMENT, INCREMENT_AND_GET. IgniteIdGenOperation propagateIncomingBodyIfNo ReturnValue (producer) Sets whether to propagate the incoming body if the return type of the underlying Ignite operation is void. true boolean treatCollectionsAsCache Objects (producer) Sets whether to treat Collections as cache objects or as Collections of items to insert/update/compute, etc. false boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 159.2. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.component.ignite-idgen.configuration-resource Sets the resource from where to load the configuration. It can be a: URI, String (URI) or an InputStream. The option is a java.lang.Object type. String camel.component.ignite-idgen.enabled Enable ignite-idgen component true Boolean camel.component.ignite-idgen.ignite Sets the Ignite instance. The option is a org.apache.ignite.Ignite type. String camel.component.ignite-idgen.ignite-configuration Allows the user to set a programmatic IgniteConfiguration. The option is a org.apache.ignite.configuration.IgniteConfiguration type. String camel.component.ignite-idgen.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean
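As an illustration, here is a possible endpoint URI that combines the documented path and query parameters; the sequence name ids and the numeric values are placeholders, not defaults.

ignite-idgen:ids?operation=INCREMENT_AND_GET&initialValue=100&batchSize=10

A producer sending to this endpoint increments the ids atomic sequence and returns the new value; as noted in the options table, the operation can also be overridden per exchange with the IgniteConstants.IGNITE_IDGEN_OPERATION header.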
[ "ignite-idgen:name" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ignite-idgen-component
8.200. sg3_utils
8.200. sg3_utils 8.200.1. RHBA-2013:0956 - sg3_utils bug fix update Updated sg3_utils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The sg3_utils packages contain a collection of tools for SCSI devices that use the Linux SCSI generic (sg) interface. This collection includes utilities for data copying based on "dd" syntax and semantics (the "sg_dd", "sgp_dd" and "sgm_dd" commands), INQUIRY data and associated page checking ("sg_inq"), mode and log page checking ("sg_modes" and "sg_logs"), disk spinning ("sg_start") and self-tests ("sg_senddiag"), as well as other utilities. It also contains the rescan-scsi-bus.sh script. Bug Fix BZ# 920687 The logic used in the show_devices() function (as used by the "sginfo" command) to identify the available storage devices opened these devices by scanning the /dev directory. Consequently, the /dev/snapshot file was opened, causing the system to prepare for suspend/hibernate and, as a part of that, to block hot-added CPUs from being activated. To fix this bug, the logic that the show_devices() function uses to decide which devices to open has been adjusted. As a result, running the "sginfo -l" command no longer has the unintended side effect of blocking the activation of hot-added CPUs. Users of sg3_utils are advised to upgrade to these updated packages, which fix this bug.
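For context, a brief sketch of how the affected utilities are typically invoked; the /dev/sg0 device path is an example, so substitute a device that exists on your system.

# List SCSI devices known to the sg driver (the command affected by this fix)
sginfo -l

# Query INQUIRY data for a single device
sg_inq /dev/sg0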
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/sg3_utils
Chapter 6. Managing projects
Chapter 6. Managing projects As a cloud administrator, you can create and manage projects. A project is a pool of shared virtual resources, to which you can assign OpenStack users and groups. You can configure the quota of shared virtual resources in each project. You can create multiple projects with Red Hat OpenStack Platform that will not interfere with each other's permissions and resources. Users can be associated with more than one project. Each user must have a role assigned for each project to which they are assigned. 6.1. Creating a project Create a project, add members to the project and set resource limits for the project. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . Click Create Project . On the Project Information tab, enter a name and description for the project. The Enabled check box is selected by default. On the Project Members tab, add members to the project from the All Users list. On the Quotas tab, specify resource limits for the project. Click Create Project . 6.2. Editing a project You can edit a project to change its name or description, enable or temporarily disable it, or update the members in the project. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . In the project Actions column, click the arrow, and click Edit Project . In the Edit Project window, you can update a project to change its name or description, and enable or temporarily disable the project. On the Project Members tab, add members to the project, or remove them as needed. Click Save . Note The Enabled check box is selected by default. To temporarily disable the project, clear the Enabled check box. To enable a disabled project, select the Enabled check box. 6.3. Deleting a project Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . Select the project that you want to delete. Click Delete Projects . The Confirm Delete Projects window is displayed. Click Delete Projects to confirm the action. The project is deleted and any user pairing is disassociated. 6.4. Updating project quotas Quotas are operational limits that you set for each project to optimize cloud resources. You can set quotas to prevent project resources from being exhausted without notification. You can enforce quotas at both the project and the project-user level. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . In the project Actions column, click the arrow, and click Modify Quotas . In the Quota tab, modify project quotas as needed. Click Save . Note At present, nested quotas are not yet supported. As such, you must manage quotas individually against projects and subprojects. 6.5. Changing the active project Set a project as the active project so that you can use the dashboard to interact with objects in the project. To set a project as the active project, you must be a member of the project. It is also necessary for the user to be a member of more than one project to have the Set as Active Project option be enabled. You cannot set a disabled project as active, unless it is re-enabled. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . In the project Actions column, click the arrow, and click Set as Active Project . Alternatively, as a non-admin user, in the project Actions column, click Set as Active Project which becomes the default action in the column. 6.6. 
Project hierarchies You can nest projects using multitenancy in the Identity service (keystone). Multitenancy allows subprojects to inherit role assignments from a parent project. 6.6.1. Creating hierarchical projects and sub-projects You can implement Hierarchical Multitenancy (HMT) using keystone domains and projects. First create a new domain and then create a project within that domain. You can then add subprojects to that project. You can also promote a user to administrator of a subproject by adding the user to the admin role for that subproject. Note The HMT structure used by keystone is not currently represented in the dashboard. Procedure Create a new keystone domain called corp : Create the parent project ( private-cloud ) within the corp domain: Create a subproject ( dev ) within the private-cloud parent project, while also specifying the corp domain: Create another subproject called qa : Note You can use the Identity API to view the project hierarchy. For more information, see https://developer.openstack.org/api-ref/identity/v3/index.html?expanded=show-project-details-detail 6.6.2. Configuring access to hierarchical projects By default, a newly-created project has no assigned roles. When you assign role permissions to the parent project, you can include the --inherited flag to instruct the subprojects to inherit the assigned permissions from the parent project. For example, a user with admin role access to the parent project also has admin access to the subprojects. Granting access to users View the existing permissions assigned to a project: View the existing roles: Grant the user account user1 access to the private-cloud project: Re-run this command using the --inherited flag. As a result, user1 also has access to the private-cloud subprojects, which have inherited the role assignment: Review the result of the permissions update: The user1 user has inherited access to the qa and dev projects. In addition, because the --inherited flag was applied to the parent project, user1 also receives access to any subprojects that are created later. Removing access from users Explicit and inherited permissions must be separately removed. Remove a user from an explicitly assigned role: Review the result of the change. Notice that the inherited permissions are still present: Remove the inherited permissions: Review the result of the change. The inherited permissions have been removed, and the resulting output is now empty: 6.6.3. Reseller project overview With the Reseller project, the goal is to have a hierarchy of domains; these domains will eventually allow you to consider reselling portions of the cloud, with a subdomain representing a fully-enabled cloud. This work has been split into phases, with phase 1 described below: Phase 1 of reseller Reseller (phase 1) is an extension of Hierarchical Multitenancy (HMT), described here: Creating hierarchical projects and sub-projects . Previously, keystone domains were originally intended to be containers that stored users and projects, with their own table in the database back-end. As a result, domains are now no longer stored in their own table, and have been merged into the project table: A domain is now a type of project, distinguished by the is_domain flag. 
A domain represents a top-level project in the project hierarchy: domains are roots in the project hierarchy APIs have been updated to create and retrieve domains using the projects subpath: Create a new domain by creating a project with the is_domain flag set to true List projects that are domains: get projects including the is_domain query parameter. 6.7. Project security management Security groups are sets of IP filter rules that can be assigned to project instances, and which define networking access to the instance. Security groups are project specific; project members can edit the default rules for their security group and add new rule sets. All projects have a default security group that is applied to any instance that has no other defined security group. Unless you change the default values, this security group denies all incoming traffic and allows only outgoing traffic from your instance. You can apply a security group directly to an instance during instance creation, or to a port on the running instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Do not delete the default security group without creating groups that allow required egress. For example, if your instances use DHCP and metadata, your instance requires security group rules that allow egress to the DHCP server and metadata agent. 6.7.1. Creating a security group Create a security group so that you can configure security rules. For example, you can enable ICMP traffic, or disable HTTP requests. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, click Create Security Group . Enter a name and description for the group, and click Create Security Group . 6.7.2. Adding a security group rule By default, rules for a new group only provide outgoing access. You must add new rules to provide additional access. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, click Manage Rules for the security group that you want to edit. Click Add Rule to add a new rule. Specify the rule values, and click Add . The following rule fields are required: Rule Rule type. If you specify a rule template (for example, 'SSH'), its fields are automatically filled in: TCP: Typically used to exchange data between systems, and for end-user communication. UDP: Typically used to exchange data between systems, particularly at the application level. ICMP: Typically used by network devices, such as routers, to send error or monitoring messages. Direction Ingress (inbound) or Egress (outbound). Open Port For TCP or UDP rules, the Port or Port Range (single port or range of ports) to open: For a range of ports, enter port values in the From Port and To Port fields. For a single port, enter the port value in the Port field. Type The type for ICMP rules; must be in the range '-1:255'. Code The code for ICMP rules; must be in the range '-1:255'. Remote The traffic source for this rule: CIDR (Classless Inter-Domain Routing): IP address block, which limits access to IPs within the block. Enter the CIDR in the Source field. Security Group: Source group that enables any instance in the group to access any other group instance. 6.7.3. 
Deleting a security group rule Delete security group rules that you no longer require. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, click Manage Rules for the security group. Select the security group rule, and click Delete Rule . Click Delete Rule again. Note You cannot undo the delete action. 6.7.4. Deleting a security group Delete security groups that you no longer require. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, select the group, and click Delete Security Groups . Click Delete Security Groups . Note You cannot undo the delete action.
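The procedures above use the dashboard. As an alternative, the same security group workflow can be sketched with the OpenStack CLI; the group name web-sg and the instance name myserver are placeholders rather than values from this guide.

# Create a security group and allow inbound SSH from any address
openstack security group create web-sg --description "allow ssh"
openstack security group rule create web-sg --protocol tcp --dst-port 22 --ingress --remote-ip 0.0.0.0/0

# Apply the group to a running instance
openstack server add security group myserver web-sg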
[ "openstack domain create corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 69436408fdcb44ab9e111691f8e9216d | | name | corp | +-------------+----------------------------------+", "openstack project create private-cloud --domain corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 69436408fdcb44ab9e111691f8e9216d | | enabled | True | | id | c50d5cf4fe2e4929b98af5abdec3fd64 | | is_domain | False | | name | private-cloud | | parent_id | 69436408fdcb44ab9e111691f8e9216d | +-------------+----------------------------------+", "openstack project create dev --parent private-cloud --domain corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 69436408fdcb44ab9e111691f8e9216d | | enabled | True | | id | 11fccd8369824baa9fc87cf01023fd87 | | is_domain | False | | name | dev | | parent_id | c50d5cf4fe2e4929b98af5abdec3fd64 | +-------------+----------------------------------+", "openstack project create qa --parent private-cloud --domain corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 69436408fdcb44ab9e111691f8e9216d | | enabled | True | | id | b4f1d6f59ddf413fa040f062a0234871 | | is_domain | False | | name | qa | | parent_id | c50d5cf4fe2e4929b98af5abdec3fd64 | +-------------+----------------------------------+", "openstack role assignment list --project private-cloud", "openstack role list +----------------------------------+-----------------+ | ID | Name | +----------------------------------+-----------------+ | 01d92614cd224a589bdf3b171afc5488 | admin | | 034e4620ed3d45969dfe8992af001514 | member | | 0aa377a807df4149b0a8c69b9560b106 | ResellerAdmin | | 9369f2bf754443f199c6d6b96479b1fa | heat_stack_user | | cfea5760d9c948e7b362abc1d06e557f | reader | | d5cb454559e44b47aaa8821df4e11af1 | swiftoperator | | ef3d3f510a474d6c860b4098ad658a29 | service | +----------------------------------+-----------------+", "openstack role add --user user1 --user-domain corp --project private-cloud member", "openstack role add --user user1 --user-domain corp --project private-cloud member --inherited", "openstack role assignment list --effective --user user1 --user-domain corp +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | c50d5cf4fe2e4929b98af5abdec3fd64 | | False | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | 11fccd8369824baa9fc87cf01023fd87 | | True | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | b4f1d6f59ddf413fa040f062a0234871 | | True | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "openstack role remove --user user1 --project private-cloud member", "openstack role assignment list --effective --user user1 --user-domain corp 
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | 11fccd8369824baa9fc87cf01023fd87 | | True | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | b4f1d6f59ddf413fa040f062a0234871 | | True | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "openstack role remove --user user1 --project private-cloud member --inherited", "openstack role assignment list --effective --user user1 --user-domain corp" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/users_and_identity_management_guide/assembly_managing-projects
Chapter 6. 4.3 Release Notes
Chapter 6. 4.3 Release Notes 6.1. New Features This following major enhancements have been introduced in Red Hat Update Infrastructure 4.3. New Pulp version: 3.21.0 This update introduces a newer version of Pulp, 3.21.0 . Among other upstream bug fixes and enhancements, this version changes how Pulp manages ambiguous CDN repodata that contains a duplicate package name-version-release string. Instead of failing, Pulp logs a warning and allows the affected repository to be synchronized. New rhui-manager command A new rhui-manager command is now available: With this command, you can reinstall all of your CDS nodes using a single command. Additionally, you do not need to specify any of the CDS host names. 6.2. Bug Fixes The following bugs have been fixed in Red Hat Update Infrastructure 4.3 that have a significant impact on users. Alternate Content Source package can now be created Previously, rhui-manager failed to create an Alternate Content Source package. With this update, the problem is now fixed and you can successfully create an Alternate Content Source package. Redundant RHUI code has now been removed With this update, several parts of redundant code have been removed from RHUI. Most notably, the unused entitlement argument in the custom repository creation has been removed. Additionally, the Atomic and OSTree functions have been removed because these features have been deprecated in RHUI 4. port variable has been renamed to remote_port Previously, CDS and HAProxy management used a variable called port . However, port is a reserved playbook keyword in Ansible. Consequently, Ansible printed warnings about the use of this variable. With this update, the variable has been renamed to remote_port which prevents the warnings. rhui-installer now returns the correct status when RHUA installation playbooks fails Previously, when the RHUA installation playbook failed, rhui-installer exited with a status of 0 , which normally indicates success. With this update, the problem has been fixed, and rhui-installer exits with a status of 1 , indicating that the RHUA installation playbook has failed. Proxy-enabled RHUI environments can now synchronize container images Previously, RHUI did not accept proxy server settings when adding container images. Consequently, RHUI was unable to synchronize container images if the proxy server configuration was required to access the container registries. With this update, RHUI now accepts proxy settings when they are configured with the container images. As a result, proxy-enabled RHUI environments can now synchronize container images. Misaligned text on the repository workflow screen is now fixed With this update, the misaligned text on the repository workflow screen in the rhui-manager text interface has been fixed. 6.3. Known Issues This part describes known issues in Red Hat Update Infrastructure 4.3. Rerunning rhui-installer resets custom container configuration Currently, the container configuration registry is managed by editing the rhui-tools.conf configuration file. However, rerunning rhui-installer overwrites the configuration file based on the latest answers file or based on the command line parameters combined with the default template, which doesn't contain any container registry configuration. Consequently, rerunning rhui-installer resets any custom changes to the container configuration registry. 
To work around this problem, save the current container configuration section from the /etc/rhui/rhui-tools.conf file and reapply the configuration to the new rhui-tools.conf file that is generated after rerunning rhui-installer .
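A possible shell sketch of that workaround; the backup location under /root is arbitrary.

# Before rerunning rhui-installer, keep a copy of the current configuration
cp -p /etc/rhui/rhui-tools.conf /root/rhui-tools.conf.bak

# After rhui-installer finishes, compare the regenerated file with the backup and
# reapply the container registry section by hand
diff -u /root/rhui-tools.conf.bak /etc/rhui/rhui-tools.conf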
[ "`rhui-manager [--noninteractive] cds reinstall --all`" ]
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/release_notes/assembly_4-3-release-notes_release-notes
Chapter 2. Specifying link cost
Chapter 2. Specifying link cost When linking sites, you can assign a cost to each link to influence the traffic flow. By default, link cost is set to 1 for a new link. In a service network, the routing algorithm attempts to use the path with the lowest total cost from client to target server. If you have services distributed across different sites, you might want a client to favor a particular target or link. In this case, you can specify a cost of greater than 1 on the alternative links to reduce the usage of those links. Note The distribution of open connections is statistical, that is, not a round robin system. If a connection only traverses one link, then the path cost is equal to the link cost. If the connection traverses more than one link, the path cost is the sum of all the links involved in the path. Cost acts as a threshold for using a path from client to server in the network. When there is only one path, traffic flows on that path regardless of cost. Note If you start with two targets for a service, and one of the targets is no longer available, traffic flows on the remaining path regardless of cost. When there are a number of paths from a client to server instances or a service, traffic flows on the lowest cost path until the number of connections exceeds the cost of an alternative path. After this threshold of open connections is reached, new connections are spread across the alternative path and the lowest cost path. Prerequisite You have set your Kubernetes context to a site that you want to link from . A token for the site that you want to link to . Procedure Create a link to the service network: USD skupper link create <filename> --cost <integer-cost> where <integer-cost> is an integer greater than 1 and traffic favors lower cost links. Note If a service can be called without traversing a link, that service is considered local, with an implicit cost of 0 . For example, create a link with cost set to 2 using a token file named token.yaml : USD skupper link create token.yaml --cost 2 Check the link cost: USD skupper link status link1 --verbose The output is similar to the following: Cost: 2 Created: 2022-11-17 15:02:01 +0000 GMT Name: link1 Namespace: default Site: default-0d99d031-cee2-4cc6-a761-697fe0f76275 Status: Connected Observe traffic using the console. If you have a console on a site, log in and navigate to the processes for each server. You can view the traffic levels corresponding to each client. Note If there are multiple clients on different sites, filter the view to each client to determine the effect of cost on traffic. For example, in a two site network linked with a high cost with servers and clients on both sites, you can see that a client is served by the local servers while a local server is available. 2.1. Exposing services on the service network from a namespace After creating a service network, exposed services can communicate across that network. The skupper CLI has two options for exposing services that already exist in a namespace: expose supports simple use cases, for example, a deployment with a single service. See Section 2.1.1, "Exposing simple services on the service network" for instructions. service create and service bind is a more flexible method of exposing services, for example, if you have multiple services for a deployment. See Section 2.1.2, "Exposing complex services on the service network" for instructions. 2.1.1. 
Exposing simple services on the service network This section describes how services can be enabled for a service network for simple use cases. Procedure Create a deployment, some pods, or a service in one of your sites, for example: USD kubectl create deployment hello-world-backend --image quay.io/skupper/hello-world-backend This step is not Skupper-specific, that is, this process is unchanged from standard processes for your cluster. Create a service that can communicate on the service network: Deployments and pods USD skupper expose [deployment <name>|pods <selector>] where <name> is the name of a deployment <selector> is a pod selector Kubernetes services Specify a resulting service name using the --address option. USD skupper expose service <name> --address <skupper-service-name> where <name> is the name of a service <skupper-service-name> is the name of the resulting service shared on the service network. StatefulSets You can expose a statefulset using: USD skupper expose statefulset <statefulsetname> A StatefulSet in Kubernetes is often associated with a headless service to provide stable, unique network identifiers for each pod. If you require stable network identifiers for each pod on the service network, use the --headless option. USD skupper expose statefulset --headless Note When you use the --headless option, only one statefulset in the service network can be exposed through the address (routing key). For the example deployment in step 1, you can create a service using the following command: USD skupper expose deployment/hello-world-backend --port 8080 Options for the expose command include: --port <port-number> :: Specify the port number that this service is available on the service network. NOTE: You can specify more than one port by repeating this option. --target-port <port-number> :: Specify the port number of pods that you want to expose. --protocol <protocol> :: Specify the protocol you want to use: tcp , http , or http2 . Note If you do not specify ports, skupper uses the containerPort value of the deployment. Check the status of services exposed on the service network ( -v is only available on Kubernetes): USD skupper service status -v Services exposed through Skupper: ╰─ backend:8080 (tcp) ╰─ Sites: ├─ 4d80f485-52fb-4d84-b10b-326b96e723b2(west) │ policy: disabled ╰─ 316fbe31-299b-490b-9391-7b46507d76f1(east) │ policy: disabled ╰─ Targets: ╰─ backend:8080 name=backend-9d84544df-rbzjx 2.1.2. Exposing complex services on the service network This section describes how services can be enabled for a service network for more complex use cases. Procedure Create a deployment, some pods, or a service in one of your sites, for example: USD kubectl create deployment hello-world-backend --image quay.io/skupper/hello-world-backend This step is not Skupper-specific, that is, this process is unchanged from standard processes for your cluster. Create a service that can communicate on the service network: USD skupper service create <name> <port> where <name> is the name of the service you want to create <port> is the port the service uses For the example deployment in step 1, you create a service using the following command: USD skupper service create hello-world-backend 8080 Bind the service to a cluster service: USD skupper service bind <service-name> <target-type> <target-name> where <service-name> is the name of the service on the service network <target-type> is the object you want to expose, deployment , statefulset , pods , or service .
<target-name> is the name of the cluster service For the example deployment in step 1, you bind the service using the following command: USD skupper service bind hello-world-backend deployment hello-world-backend 2.1.3. Exposing services from a different namespace to the service network This section shows how to expose a service from a namespace where Skupper is not deployed. Skupper allows you to expose Kubernetes services from other namespaces for any site. However, if you want to expose workloads, for example deployments, you must create a site as described in this section. Prerequisites A namespace where Skupper is deployed. A network policy that allows communication between the namespaces. cluster-admin permissions if you want to expose resources other than services. Procedure Create a site with cluster permissions if you want to expose a workload from a namespace other than the site namespace: Note The site does not require the extra permissions granted with the --enable-cluster-permissions option to expose a Kubernetes service resource. USD skupper init --enable-cluster-permissions To expose a Kubernetes service from a namespace other than the site namespace: USD skupper expose service <service>.<namespace> --address <service> <service> - the name of the service on the service network. <namespace> - the name of the namespace where the service you want to expose runs. For example, if you deployed Skupper in the east namespace and you created a backend Kubernetes service in the east-backend namespace, you set the context to the east namespace and expose the service as backend on the service network using: USD skupper expose service backend.east-backend --port 8080 --address backend To expose a workload from a site created with --enable-cluster-permissions : USD skupper expose <resource> --port <port-number> --target-namespace <namespace> <resource> - the name of the resource. <namespace> - the name of the namespace where the resource you want to expose runs. For example, if you deployed Skupper in the east namespace and you created a backend deployment in the east-backend namespace, you set the context to the east namespace and expose the service as backend on the service network using: USD skupper expose deployment/backend --port 8080 --target-namespace east-backend
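The following commands are a minimal end-to-end sketch that combines the link cost and service exposure steps above; they are not part of the original procedure. The sketch assumes two example kubectl contexts named west and east, a token created on the target site, and the hello-world-backend deployment used earlier; substitute the names used in your own sites.

# On the target site (context "west"), create a token that the other site can use to link.
kubectl config use-context west
skupper token create token.yaml

# On the client site (context "east"), link to the target site with a higher cost,
# so that local targets are preferred while they remain available.
kubectl config use-context east
skupper link create token.yaml --cost 2
skupper link status --verbose

# Back on the target site, expose the backend on the service network.
kubectl config use-context west
skupper expose deployment/hello-world-backend --port 8080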
[ "skupper link create <filename> --cost <integer-cost>", "skupper link create token.yaml --cost 2", "skupper link status link1 --verbose", "Cost: 2 Created: 2022-11-17 15:02:01 +0000 GMT Name: link1 Namespace: default Site: default-0d99d031-cee2-4cc6-a761-697fe0f76275 Status: Connected", "kubectl create deployment hello-world-backend --image quay.io/skupper/hello-world-backend", "skupper expose [deployment <name>|pods <selector>]", "skupper expose service <name> --address <skupper-service-name>", "skupper expose statefulset <statefulsetname>", "skupper expose statefulset --headless", "skupper expose deployment/hello-world-backend --port 8080", "skupper service status -v Services exposed through Skupper: ╰─ backend:8080 (tcp) ╰─ Sites: ├─ 4d80f485-52fb-4d84-b10b-326b96e723b2(west) │ policy: disabled ╰─ 316fbe31-299b-490b-9391-7b46507d76f1(east) │ policy: disabled ╰─ Targets: ╰─ backend:8080 name=backend-9d84544df-rbzjx", "kubectl create deployment hello-world-backend --image quay.io/skupper/hello-world-backend", "skupper service create <name> <port>", "skupper service create hello-world-backend 8080", "skupper service bind <service-name> <target-type> <target-name>", "skupper service bind hello-world-backend deployment hello-world-backend", "skupper init --enable-cluster-permissions", "skupper expose service <service>.<namespace> --address <service>", "skupper expose service backend.east-backend --port 8080 --address backend", "skupper expose <resource> --port <port-number> --target-namespace <namespace>", "skupper expose deployment/backend --port 8080 --target-namespace east-backend" ]
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/using_service_interconnect/k8sspecifying-link-cost
Chapter 20. OpenSSH
Chapter 20. OpenSSH OpenSSH is a free, open source implementation of the SSH (Secure SHell) protocols. It replaces telnet , ftp , rlogin , rsh , and rcp with secure, encrypted network connectivity tools. OpenSSH supports versions 1.3, 1.5, and 2 of the SSH protocol. Since OpenSSH version 2.9, the default protocol is version 2, which uses RSA keys as the default. 20.1. Why Use OpenSSH? If you use OpenSSH tools, you are enhancing the security of your machine. All communications using OpenSSH tools, including passwords, are encrypted. Telnet and ftp use plain text passwords and send all information unencrypted. The information can be intercepted, the passwords can be retrieved, and your system could be compromised by an unauthorized person logging in to your system using one of the intercepted passwords. The OpenSSH set of utilities should be used whenever possible to avoid these security problems. Another reason to use OpenSSH is that it automatically forwards the DISPLAY variable to the client machine. In other words, if you are running the X Window System on your local machine, and you log in to a remote machine using the ssh command, when you run a program on the remote machine that requires X, it will be displayed on your local machine. This feature is convenient if you prefer graphical system administration tools but do not always have physical access to your server.
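The following commands are a brief sketch of the X forwarding behavior described above; the user name and host name are placeholders, and the server must permit X11 forwarding.

# Log in to the remote machine with X11 forwarding requested.
ssh -X admin@server.example.com

# Any X client started in this session, for example xclock,
# is displayed on the local X server.
xclock &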
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/OpenSSH
2.2. perf kvm
2.2. perf kvm You can use the perf command with the kvm option to collect guest operating system statistics from the host. In Red Hat Enterprise Linux, the perf package provides the perf command. Run rpm -q perf to see if the perf package is installed. If it is not installed, and you want to install it to collect and analyze guest operating system statistics, run the following command as the root user: In order to use perf kvm in the host, you must have access to the /proc/modules and /proc/kallsyms files from the guest. There are two methods to achieve this. Refer to the following procedure, Procedure 2.1, "Copying /proc files from guest to host" to transfer the files into the host and run reports on the files. Alternatively, refer to Procedure 2.2, "Alternative: using sshfs to directly access files" to directly mount the guest and access the files. Procedure 2.1. Copying /proc files from guest to host Important If you directly copy the required files (for instance, via scp ) you will only copy files of zero length. This procedure describes how to first save the files in the guest to a temporary location (with the cat command), and then copy them to the host for use by perf kvm . Log in to the guest and save files Log in to the guest and save /proc/modules and /proc/kallsyms to a temporary location, /tmp : Copy the temporary files to the host Once you have logged off from the guest, run the following example scp commands to copy the saved files to the host. You should substitute your host name and TCP port if they are different: You now have two files from the guest ( guest-kallsyms and guest-modules ) on the host, ready for use by perf kvm . Recording and reporting events with perf kvm Using the files obtained in the steps, recording and reporting of events in the guest, the host, or both is now possible. Run the following example command: Note If both --host and --guest are used in the command, output will be stored in perf.data.kvm . If only --host is used, the file will be named perf.data.host . Similarly, if only --guest is used, the file will be named perf.data.guest . Pressing Ctrl-C stops recording. Reporting events The following example command uses the file obtained by the recording process, and redirects the output into a new file, analyze . View the contents of the analyze file to examine the recorded events: # cat analyze # Events: 7K cycles # # Overhead Command Shared Object Symbol # ........ ............ ................. ......................... # 95.06% vi vi [.] 0x48287 0.61% init [kernel.kallsyms] [k] intel_idle 0.36% vi libc-2.12.so [.] _wordcopy_fwd_aligned 0.32% vi libc-2.12.so [.] __strlen_sse42 0.14% swapper [kernel.kallsyms] [k] intel_idle 0.13% init [kernel.kallsyms] [k] uhci_irq 0.11% perf [kernel.kallsyms] [k] generic_exec_single 0.11% init [kernel.kallsyms] [k] tg_shares_up 0.10% qemu-kvm [kernel.kallsyms] [k] tg_shares_up [output truncated...] Procedure 2.2. Alternative: using sshfs to directly access files Important This is provided as an example only. You will need to substitute values according to your environment. # Get the PID of the qemu process for the guest: PID=`ps -eo pid,cmd | grep "qemu.*-name GuestMachine" \ | grep -v grep | awk '{print USD1}'` # Create mount point and mount guest mkdir -p /tmp/guestmount/USDPID sshfs -o allow_other,direct_io GuestMachine:/ /tmp/guestmount/USDPID # Begin recording perf kvm --host --guest --guestmount=/tmp/guestmount \ record -a -o perf.data # Ctrl-C interrupts recording. 
Run report: perf kvm --host --guest --guestmount=/tmp/guestmount report \ -i perf.data # Unmount sshfs to the guest once finished: fusermount -u /tmp/guestmount
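The copy-based procedure can also be wrapped in a short script. The following is a minimal sketch that assumes the guest is reachable over SSH as GuestMachine and that the root user can log in; it simply mirrors the commands shown in Procedure 2.1.

#!/bin/bash
# Save the symbol files inside the guest, then copy them to the host.
ssh root@GuestMachine 'cat /proc/modules > /tmp/modules; cat /proc/kallsyms > /tmp/kallsyms'
scp root@GuestMachine:/tmp/kallsyms guest-kallsyms
scp root@GuestMachine:/tmp/modules guest-modules

# Record events for both host and guest (press Ctrl-C to stop).
perf kvm --host --guest --guestkallsyms=guest-kallsyms \
    --guestmodules=guest-modules record -a -o perf.data

# Report the recorded events into a file for later review.
perf kvm --host --guest --guestmodules=guest-modules \
    report -i perf.data.kvm --force > analyze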
[ "install perf", "cat /proc/modules > /tmp/modules cat /proc/kallsyms > /tmp/kallsyms", "scp root@GuestMachine:/tmp/kallsyms guest-kallsyms scp root@GuestMachine:/tmp/modules guest-modules", "perf kvm --host --guest --guestkallsyms=guest-kallsyms --guestmodules=guest-modules record -a -o perf.data", "perf kvm --host --guest --guestmodules=guest-modules report -i perf.data.kvm --force > analyze", "cat analyze Events: 7K cycles # Overhead Command Shared Object Symbol ........ ............ ................. ...................... # 95.06% vi vi [.] 0x48287 0.61% init [kernel.kallsyms] [k] intel_idle 0.36% vi libc-2.12.so [.] _wordcopy_fwd_aligned 0.32% vi libc-2.12.so [.] __strlen_sse42 0.14% swapper [kernel.kallsyms] [k] intel_idle 0.13% init [kernel.kallsyms] [k] uhci_irq 0.11% perf [kernel.kallsyms] [k] generic_exec_single 0.11% init [kernel.kallsyms] [k] tg_shares_up 0.10% qemu-kvm [kernel.kallsyms] [k] tg_shares_up [output truncated...]", "Get the PID of the qemu process for the guest: PID=`ps -eo pid,cmd | grep \"qemu.*-name GuestMachine\" | grep -v grep | awk '{print USD1}'` Create mount point and mount guest mkdir -p /tmp/guestmount/USDPID sshfs -o allow_other,direct_io GuestMachine:/ /tmp/guestmount/USDPID Begin recording perf kvm --host --guest --guestmount=/tmp/guestmount record -a -o perf.data Ctrl-C interrupts recording. Run report: perf kvm --host --guest --guestmount=/tmp/guestmount report -i perf.data Unmount sshfs to the guest once finished: fusermount -u /tmp/guestmount" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-Monitoring_Tools-perf_kvm
Chapter 1. Managing data science pipelines
Chapter 1. Managing data science pipelines 1.1. Configuring a pipeline server Before you can successfully create a pipeline in OpenShift AI, you must configure a pipeline server. This task includes configuring where your pipeline artifacts and data are stored. Note You are not required to specify any storage directories when configuring a connection for your pipeline server. When you import a pipeline, the /pipelines folder is created in the root folder of the bucket, containing a YAML file for the pipeline. If you upload a new version of the same pipeline, a new YAML file with a different ID is added to the /pipelines folder. When you run a pipeline, the artifacts are stored in the /pipeline-name folder in the root folder of the bucket. Important If you use an external MySQL database and upgrade to OpenShift AI 2.9 or later, the database is migrated to data science pipelines 2.0 format, making it incompatible with earlier versions of OpenShift AI. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project that you can add a pipeline server to. You have an existing S3-compatible object storage bucket and you have configured write access to your S3 bucket on your storage account. If you are configuring a pipeline server for production pipeline workloads, you have an existing external MySQL or MariaDB database. If you are configuring a pipeline server with an external MySQL database, your database must use at least MySQL version 5.x. However, Red Hat recommends that you use MySQL version 8.x. Note The mysql_native_password authentication plugin is required for the ML Metadata component to successfully connect to your database. mysql_native_password is disabled by default in MySQL 8.4 and later. If your database uses MySQL 8.4 or later, you must update your MySQL deployment to enable the mysql_native_password plugin. For more information about enabling the mysql_native_password plugin, see Native Pluggable Authentication in the MySQL documentation. If you are configuring a pipeline server with a MariaDB database, your database must use MariaDB version 10.3 or later. However, Red Hat recommends that you use at least MariaDB version 10.5. Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that you want to configure a pipeline server for. A project details page opens. Click the Pipelines tab. Click Configure pipeline server . The Configure pipeline server dialog appears. In the Object storage connection section, provide values for the mandatory fields: In the Access key field, enter the access key ID for the S3-compatible object storage provider. In the Secret key field, enter the secret access key for the S3-compatible object storage account that you specified. In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket. In the Region field, enter the default region of your S3-compatible object storage account. In the Bucket field, enter the name of your S3-compatible object storage bucket. Important If you specify incorrect connection settings, you cannot update these settings on the same pipeline server. Therefore, you must delete the pipeline server and configure another one. 
If you want to use an existing artifact that was not generated by a task in a pipeline, you can use the kfp.dsl.importer component to import the artifact from its URI. You can only import these artifacts to the S3-compatible object storage bucket that you define in the Bucket field in your pipeline server configuration. For more information about the kfp.dsl.importer component, see Special Case: Importer Components . In the Database section, click Show advanced database options to specify the database to store your pipeline data and select one of the following sets of actions: Select Use default database stored on your cluster to deploy a MariaDB database in your project. Important The Use default database stored on your cluster option is intended for development and testing purposes only. For production pipeline workloads, select the Connect to external MySQL database option to use an external MySQL or MariaDB database. Select Connect to external MySQL database to add a new connection to an external MySQL or MariaDB database that your pipeline server can access. In the Host field, enter the database's host name. In the Port field, enter the database's port. In the Username field, enter the default user name that is connected to the database. In the Password field, enter the password for the default user account. In the Database field, enter the database name. Click Configure pipeline server . Verification On the Pipelines tab for the project: The Import pipeline button is available. When you click the action menu ( ... ) and then click View pipeline server configuration , the pipeline server details are displayed. 1.1.1. Configuring a pipeline server with an external Amazon RDS database To configure a pipeline server with an external Amazon Relational Database Service (RDS) database, you must configure OpenShift AI to trust the certificates issued by its certificate authorities (CA). Important If you are configuring a pipeline server for production pipeline workloads, Red Hat recommends that you use an external MySQL or MariaDB database. Prerequisites You have cluster administrator privileges for your OpenShift cluster. You have logged in to Red Hat OpenShift AI. You have created a data science project that you can add a pipeline server to. You have an existing S3-compatible object storage bucket, and you have configured your storage account with write access to your S3 bucket. Procedure Before configuring your pipeline server, from Amazon RDS: Certificate bundles by AWS Region , download the PEM certificate bundle for the region that the database was created in. For example, if the database was created in the us-east-1 region, download us-east-1-bundle.pem . In a terminal window, log in to the OpenShift cluster where OpenShift AI is deployed. Run the following command to fetch the current OpenShift AI trusted CA configuration and store it in a new file: Run the following command to append the PEM certificate bundle that you downloaded to the new custom CA configuration file: Run the following command to update the OpenShift AI trusted CA configuration to trust certificates issued by the CAs included in the new custom CA configuration file: Configure a pipeline server, as described in Configuring a pipeline server . Verification The pipeline server starts successfully. You can import and run data science pipelines. 1.2. Defining a pipeline The Kubeflow Pipelines SDK enables you to define end-to-end machine learning and data pipelines. 
Use the latest Kubeflow Pipelines 2.0 SDK to build your data science pipeline in Python code. After you have built your pipeline, use the SDK to compile it into an Intermediate Representation (IR) YAML file. After defining the pipeline, you can import the YAML file to the OpenShift AI dashboard to enable you to configure its execution settings. You can also use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab. For more information about creating pipelines in JupyterLab, see Working with pipelines in JupyterLab . For more information about the Elyra JupyterLab extension, see Elyra Documentation . Additional resources Kubeflow Pipelines 2.0 Documentation Elyra Documentation 1.3. Importing a data science pipeline To help you begin working with data science pipelines in OpenShift AI, you can import a YAML file containing your pipeline's code to an active pipeline server, or you can import the YAML file from a URL. This file contains a Kubeflow pipeline compiled by using the Kubeflow compiler. After you have imported the pipeline to a pipeline server, you can execute the pipeline by creating a pipeline run. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and contains a configured pipeline server. You have compiled your pipeline with the Kubeflow compiler and you have access to the resulting YAML file. If you are uploading your pipeline from a URL, the URL is publicly accessible. Procedure From the OpenShift AI dashboard, click Data Science Pipelines Pipelines . On the Pipelines page, from the Project drop-down list, select the project that you want to import a pipeline to. Click Import pipeline . In the Import pipeline dialog, enter the details for the pipeline that you want to import. In the Pipeline name field, enter a name for the pipeline that you want to import. In the Pipeline description field, enter a description for the pipeline that you want to import. Select where you want to import your pipeline from by performing one of the following actions: Select Upload a file to upload your pipeline from your local machine's file system. Import your pipeline by clicking Upload , or by dragging and dropping a file. Select Import by url to upload your pipeline from a URL, and then enter the URL into the text box. Click Import pipeline . Verification The pipeline that you imported appears on the Pipelines page and on the Pipelines tab on the project details page. 1.4. Deleting a data science pipeline If you no longer require access to your data science pipeline on the dashboard, you can delete it so that it does not appear on the Data Science Pipelines page. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. There are active pipelines available on the Pipelines page. The pipeline that you want to delete does not contain any pipeline versions. For more information, see Deleting a pipeline version . Procedure From the OpenShift AI dashboard, click Data Science Pipelines Pipelines . On the Pipelines page, from the Project drop-down list, select the project that contains the pipeline that you want to delete.
Click the action menu ( ... ) beside the pipeline that you want to delete, and then click Delete pipeline . In the Delete pipeline dialog, enter the pipeline name in the text field to confirm that you intend to delete it. Click Delete pipeline . Verification The data science pipeline that you deleted no longer appears on the Pipelines page. 1.5. Deleting a pipeline server After you have finished running your data science pipelines, you can delete the pipeline server. Deleting a pipeline server automatically deletes all of its associated pipelines, pipeline versions, and runs. If your pipeline data is stored in a database, the database is also deleted along with its meta-data. In addition, after deleting a pipeline server, you cannot create new pipelines or pipeline runs until you create another pipeline server. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and contains a pipeline server. Procedure From the OpenShift AI dashboard, click Data Science Pipelines Pipelines . On the Pipelines page, from the Project drop-down list, select the project that contains the pipeline server that you want to delete. From the Pipeline server actions list, select Delete pipeline server . In the Delete pipeline server dialog, enter the name of the pipeline server in the text field to confirm that you intend to delete it. Click Delete . Verification Pipelines previously assigned to the deleted pipeline server no longer appear on the Pipelines page for the relevant data science project. Pipeline runs previously assigned to the deleted pipeline server no longer appear on the Runs page for the relevant data science project. 1.6. Viewing the details of a pipeline server You can view the details of pipeline servers configured in OpenShift AI, such as the pipeline's connection details and where its data is stored. Prerequisites You have logged in to Red Hat OpenShift AI. You have previously created a data science project that contains an active and available pipeline server. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. Procedure From the OpenShift AI dashboard, click Data Science Pipelines Pipelines . On the Pipelines page, from the Project drop-down list, select the project that contains the pipeline server that you want to view. From the Pipeline server actions list, select View pipeline server configuration . Verification You can view the pipeline server details in the View pipeline server dialog. 1.7. Viewing existing pipelines You can view the details of pipelines that you have imported to Red Hat OpenShift AI, such as the pipeline's last run, when it was created, the pipeline's executed runs, and details of any associated pipeline versions. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and contains a pipeline server. You have imported a pipeline to an active pipeline server. Existing pipelines are available. Procedure From the OpenShift AI dashboard, click Data Science Pipelines Pipelines . 
On the Pipelines page, from the Project drop-down list, select the project that contains the pipelines that you want to view. Optional: Click Expand ( ) on the row of a pipeline to view its pipeline versions. Verification A list of data science pipelines appears on the Pipelines page. 1.8. Overview of pipeline versions You can manage incremental changes to pipelines in OpenShift AI by using versioning. This allows you to develop and deploy pipelines iteratively, preserving a record of your changes. You can track and manage your changes on the OpenShift AI dashboard, allowing you to schedule and execute runs against all available versions of your pipeline. 1.9. Uploading a pipeline version You can upload a YAML file to an active pipeline server that contains the latest version of your pipeline, or you can upload the YAML file from a URL. The YAML file must consist of a Kubeflow pipeline compiled by using the Kubeflow compiler. After you upload a pipeline version to a pipeline server, you can execute it by creating a pipeline run. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and contains a configured pipeline server. You have a pipeline version available and ready to upload. If you are uploading your pipeline version from a URL, the URL is publicly accessible. Procedure From the OpenShift AI dashboard, click Data Science Pipelines Pipelines . On the Pipelines page, from the Project drop-down list, select the project that you want to upload a pipeline version to. Click the Import pipeline drop-down list, and then select Upload new version . In the Upload new version dialog, enter the details for the pipeline version that you are uploading. From the Pipeline list, select the pipeline that you want to upload your pipeline version to. In the Pipeline version name field, confirm the name for the pipeline version, and change it if necessary. In the Pipeline version description field, enter a description for the pipeline version. Select where you want to upload your pipeline version from by performing one of the following actions: Select Upload a file to upload your pipeline version from your local machine's file system. Import your pipeline version by clicking Upload , or by dragging and dropping a file. Select Import by url to upload your pipeline version from a URL, and then enter the URL into the text box. Click Upload . Verification The pipeline version that you uploaded is displayed on the Pipelines page. Click Expand ( ) on the row containing the pipeline to view its versions. The Version column on the row containing the pipeline version that you uploaded on the Pipelines page increments by one. 1.10. Deleting a pipeline version You can delete specific versions of a pipeline when you no longer require them. Deleting a default pipeline version automatically changes the default pipeline version to the most recent version. If no pipeline versions exist, the pipeline persists without a default version. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and contains a pipeline server. You have imported a pipeline to an active pipeline server. 
Procedure From the OpenShift AI dashboard, click Data Science Pipelines Pipelines . The Pipelines page opens. Delete the pipeline versions that you no longer require: To delete a single pipeline version: From the Project list, select the project that contains a version of a pipeline that you want to delete. On the row containing the pipeline, click Expand ( ). Click the action menu ( ... ) beside the pipeline version that you want to delete, and then click Delete pipeline version . The Delete pipeline version dialog opens. Enter the name of the pipeline version in the text field to confirm that you intend to delete it. Click Delete . To delete multiple pipeline versions: On the row containing each pipeline version that you want to delete, select the checkbox. Click the action menu ( ... ) next to the Import pipeline drop-down list, and then select Delete from the list. Verification The pipeline version that you deleted no longer appears on the Pipelines page, or on the Pipelines tab for the data science project. 1.11. Viewing the details of a pipeline version You can view the details of a pipeline version that you have uploaded to Red Hat OpenShift AI, such as its graph and YAML code. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and contains a pipeline server. You have a pipeline available on an active and available pipeline server. Procedure From the OpenShift AI dashboard, click Data Science Pipelines Pipelines . The Pipelines page opens. From the Project drop-down list, select the project that contains the pipeline versions that you want to view details for. Click the pipeline name to view further details of its most recent version. The pipeline version details page opens, displaying the Graph , Summary , and Pipeline spec tabs. Alternatively, click Expand ( ) on the row containing the pipeline that you want to view versions for, and then click the pipeline version that you want to view the details of. The pipeline version details page opens, displaying the Graph , Summary , and Pipeline spec tabs. Verification On the pipeline version details page, you can view the pipeline graph, summary details, and YAML code. 1.12. Downloading a data science pipeline version To make further changes to a data science pipeline version that you previously uploaded to OpenShift AI, you can download pipeline version code from the user interface. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have previously created a data science project that is available and contains a configured pipeline server. You have created and imported a pipeline to an active pipeline server that is available to download. Procedure From the OpenShift AI dashboard, click Data Science Pipelines Pipelines . On the Pipelines page, from the Project drop-down list, select the project that contains the version that you want to download. Click Expand ( ) beside the pipeline that contains the version that you want to download. Click the pipeline version that you want to download. The pipeline version details page opens. Click the Pipeline spec tab, and then click the Download button ( ) to download the YAML file that contains the pipeline version code to your local machine.
Verification The pipeline version code downloads to your browser's default directory for downloaded files. 1.13. Overview of data science pipelines caching OpenShift AI supports caching within data science pipelines to optimize execution times and improve resource efficiency. Using caching reduces redundant task execution by reusing results from runs with identical inputs. Caching is particularly beneficial for iterative tasks, where intermediate steps might not need to be repeated. Understanding caching can help you design more efficient pipelines and save time in model development. Caching operates by storing the outputs of successfully completed tasks and comparing the inputs of new tasks against previously cached ones. If a match is found, OpenShift AI reuses the cached results instead of re-executing the task, reducing computation time and resource usage. 1.13.1. Caching criteria For caching to be effective, the following criteria determines if a task can use previously cached results: Input data and parameters : If the input data and parameters for a task are unchanged from a run, cached results are eligible for reuse. Task code and configuration : Changes to the task code or configurations invalidate the cache to ensure that modifications are always reflected. Pipeline environment : Changes to the pipeline environment, such as dependency versions, also affect caching eligibility to maintain consistency. 1.13.2. Viewing cached steps in the OpenShift AI user interface Cached steps in pipelines are visually indicated in the user interface (UI): Tasks that use cached results display a green icon, helping you quickly identify which steps were cached. The Status field in the side panel displays Cached for cached tasks. The UI also includes information about when the task was previously executed, allowing for easy verification of cache usage. To confirm caching status for specific tasks, navigate to the pipeline details view in the UI, where all cached and non-cached tasks are indicated. When a pipeline task is cached, its execution logs are not available. This is because the task uses previously generated outputs, eliminating the need for re-execution. 1.13.3. Disabling caching for specific tasks or pipelines In OpenShift AI, caching is enabled by default, but there are situations where disabling caching for specific tasks or the entire pipeline is necessary. For example, tasks that rely on frequently updated data or unique computational needs might not benefit from caching. 1.13.3.1. Disabling caching for individual tasks To disable caching for a particular task, apply the set_caching_options method directly to the task in your pipeline code: task_name.set_caching_options(False) After applying this setting, OpenShift AI executes the task in all future pipeline runs, ignoring any cached results. Note You can re-enable caching for individual tasks by setting set_caching_options(True) . 1.13.3.2. Disabling caching for pipelines If necessary, you can disable caching for the entire pipeline during pipeline submission by setting the enable_caching parameter to False in your pipeline code. This setting ensures that no steps are cached during pipeline execution. The enable_caching parameter is available only when using the kfp.client to submit pipelines or start pipeline runs, such as the run_pipeline method. Example: pipeline_func(enable_caching=False) Caution When disabling caching at the pipeline level, all tasks are re-executed, potentially increasing compute time and resource usage. 1.13.4. 
Verification and troubleshooting After configuring caching settings, you can verify that caching behaves as expected by using one of the following methods: Checking the UI : Confirm cached steps by locating the steps with the green icon in the task list. Testing task re-runs : Disable caching on individual tasks or the pipeline and check for re-execution to verify cache bypassing. Validating inputs : Ensure the task inputs, parameters, and environment remain unchanged if caching applies. Additional resources Kubeflow caching documentation: Kubeflow Pipelines - Caching The kfp.client module documentation for the enable_caching parameter: Kubeflow Pipelines - Caching
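The following Python sketch shows how the caching controls described above might fit into a Kubeflow Pipelines 2.0 pipeline definition. The component and pipeline names are illustrative and do not come from the product documentation; adapt them to your own pipeline code.

from kfp import dsl, compiler

@dsl.component
def fetch_data() -> str:
    # Illustrative task; identical inputs allow cached results to be reused.
    return "dataset-v1"

@dsl.component
def train(dataset: str) -> str:
    # Illustrative training step.
    return "model trained on " + dataset

@dsl.pipeline(name="caching-demo")
def caching_demo():
    data_task = fetch_data()
    # Disable caching for this task only; it re-executes on every run.
    data_task.set_caching_options(False)
    train(dataset=data_task.output)

if __name__ == "__main__":
    # Compile to the IR YAML file that you import through the dashboard.
    compiler.Compiler().compile(caching_demo, "caching_demo.yaml")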
[ "login api.<cluster_name>.<cluster_domain>:6443 --web", "get dscinitializations.dscinitialization.opendatahub.io default-dsci -o json | jq '.spec.trustedCABundle.customCABundle' > /tmp/my-custom-ca-bundles.crt", "cat us-east-1-bundle.pem >> /tmp/my-custom-ca-bundles.crt", "patch dscinitialization default-dsci --type='json' -p='[{\"op\":\"replace\",\"path\":\"/spec/trustedCABundle/customCABundle\",\"value\":\"'\"USD(awk '{printf \"%s\\\\n\", USD0}' /tmp/my-custom-ca-bundles.crt)\"'\"}]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_data_science_pipelines/managing-data-science-pipelines_ds-pipelines
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_set_up_sso_with_kerberos/making-open-source-more-inclusive
Chapter 9. Configuring the database
Chapter 9. Configuring the database This chapter explains how to configure the Red Hat build of Keycloak server to store data in a relational database. 9.1. Supported databases The server has built-in support for different databases. You can query the available databases by viewing the expected values for the db configuration option. The following table lists the supported databases and their tested versions. Database Option value Tested Version MariaDB Server mariadb 10.11 Microsoft SQL Server mssql 2022 MySQL mysql 8.0 Oracle Database oracle 19.3 PostgreSQL postgres 16 Amazon Aurora PostgreSQL postgres 16.1 By default, the server uses the dev-file database. This is the default database that the server will use to persist data and only exists for development use-cases. The dev-file database is not suitable for production use-cases, and must be replaced before deploying to production. 9.2. Installing a database driver Database drivers are shipped as part of Red Hat build of Keycloak except for the Oracle Database and Microsoft SQL Server drivers. Install the necessary missing driver manually if you want to connect to one of these databases or skip this section if you want to connect to a different database for which the database driver is already included. 9.2.1. Installing the Oracle Database driver To install the Oracle Database driver for Red Hat build of Keycloak: Download the ojdbc11 and orai18n JAR files from one of the following sources: Zipped JDBC driver and Companion Jars version 23.5.0.24.07 from the Oracle driver download page . Maven Central via ojdbc11 and orai18n . Installation media recommended by the database vendor for the specific database in use. When running the unzipped distribution: Place the ojdbc11 and orai18n JAR files in Red Hat build of Keycloak's providers folder When running containers: Build a custom Red Hat build of Keycloak image and add the JARs in the providers folder. When building a custom image for the Operator, those images need to be optimized images with all build-time options of Red Hat build of Keycloak set. A minimal Containerfile to build an image which can be used with the Red Hat build of Keycloak Operator and includes Oracle Database JDBC drivers downloaded from Maven Central looks like the following: FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc11/23.5.0.24.07/ojdbc11-23.5.0.24.07.jar /opt/keycloak/providers/ojdbc11.jar ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/nls/orai18n/23.5.0.24.07/orai18n-23.5.0.24.07.jar /opt/keycloak/providers/orai18n.jar # Setting the build parameter for the database: ENV KC_DB=oracle # Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true # To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build See the Running Red Hat build of Keycloak in a container chapter for details on how to build optimized images. Then continue configuring the database as described in the section. 9.2.2. Installing the Microsoft SQL Server driver To install the Microsoft SQL Server driver for Red Hat build of Keycloak: Download the mssql-jdbc JAR file from one of the following sources: Download a version from the Microsoft JDBC Driver for SQL Server page . 
Maven Central via mssql-jdbc . Installation media recommended by the database vendor for the specific database in use. When running the unzipped distribution: Place the mssql-jdbc in Red Hat build of Keycloak's providers folder When running containers: Build a custom Red Hat build of Keycloak image and add the JARs in the providers folder. When building a custom image for the Red Hat build of Keycloak Operator, those images need to be optimized images with all build-time options of Red Hat build of Keycloak set. A minimal Containerfile to build an image which can be used with the Red Hat build of Keycloak Operator and includes Microsoft SQL Server JDBC drivers downloaded from Maven Central looks like the following: FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/12.8.1.jre11/mssql-jdbc-12.8.1.jre11.jar /opt/keycloak/providers/mssql-jdbc.jar # Setting the build parameter for the database: ENV KC_DB=mssql # Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true # To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build See the Running Red Hat build of Keycloak in a container chapter for details on how to build optimized images. Then continue configuring the database as described in the section. 9.3. Configuring a database For each supported database, the server provides some opinionated defaults to simplify database configuration. You complete the configuration by providing some key settings such as the database host and credentials. Start the server and set the basic options to configure a database bin/kc.[sh|bat] start --db postgres --db-url-host mypostgres --db-username myuser --db-password change_me This command includes the minimum settings needed to connect to the database. The default schema is keycloak , but you can change it by using the db-schema configuration option. Warning Do NOT use the --optimized flag for the start command if you want to use a particular DB (except the H2). Executing the build phase before starting the server instance is necessary. You can achieve it either by starting the instance without the --optimized flag, or by executing the build command before the optimized start. For more information, see Configuring Red Hat build of Keycloak . 9.4. Overriding default connection settings The server uses JDBC as the underlying technology to communicate with the database. If the default connection settings are insufficient, you can specify a JDBC URL using the db-url configuration option. The following is a sample command for a PostgreSQL database. bin/kc.[sh|bat] start --db postgres --db-url jdbc:postgresql://mypostgres/mydatabase Be aware that you need to escape characters when invoking commands containing special shell characters such as ; using the CLI, so you might want to set it in the configuration file instead. 9.5. Overriding the default JDBC driver The server uses a default JDBC driver accordingly to the database you chose. To set a different driver you can set the db-driver with the fully qualified class name of the JDBC driver: bin/kc.[sh|bat] start --db postgres --db-driver=my.Driver Regardless of the driver you set, the default driver is always available at runtime. Only set this property if you really need to. 
For instance, when leveraging the capabilities from a JDBC Driver Wrapper for a specific cloud database service. 9.6. Configuring Unicode support for the database Unicode support for all fields depends on whether the database allows VARCHAR and CHAR fields to use the Unicode character set. If these fields can be set, Unicode is likely to work, usually at the expense of field length. If the database only supports Unicode in the NVARCHAR and NCHAR fields, Unicode support for all text fields is unlikely to work because the server schema uses VARCHAR and CHAR fields extensively. The database schema provides support for Unicode strings only for the following special fields: Realms : display name, HTML display name, localization texts (keys and values) Federation Providers: display name Users : username, given name, last name, attribute names and values Groups : name, attribute names and values Roles : name Descriptions of objects Otherwise, characters are limited to those contained in database encoding, which is often 8-bit. However, for some database systems, you can enable UTF-8 encoding of Unicode characters and use the full Unicode character set in all text fields. For a given database, this choice might result in a shorter maximum string length than the maximum string length supported by 8-bit encodings. 9.6.1. Configuring Unicode support for an Oracle database Unicode characters are supported in an Oracle database if the database was created with Unicode support in the VARCHAR and CHAR fields. For example, you configured AL32UTF8 as the database character set. In this case, the JDBC driver requires no special settings. If the database was not created with Unicode support, you need to configure the JDBC driver to support Unicode characters in the special fields. You configure two properties. Note that you can configure these properties as system properties or as connection properties. Set oracle.jdbc.defaultNChar to true . Optionally, set oracle.jdbc.convertNcharLiterals to true . Note For details on these properties and any performance implications, see the Oracle JDBC driver configuration documentation. 9.6.2. Unicode support for a Microsoft SQL Server database Unicode characters are supported only for the special fields for a Microsoft SQL Server database. The database requires no special settings. The sendStringParametersAsUnicode property of JDBC driver should be set to false to significantly improve performance. Without this parameter, the Microsoft SQL Server might be unable to use indexes. 9.6.3. Configuring Unicode support for a MySQL database Unicode characters are supported in a MySQL database if the database was created with Unicode support in the VARCHAR and CHAR fields when using the CREATE DATABASE command. Note that the utf8mb4 character set is not supported due to different storage requirements for the utf8 character set. See MySQL documentation for details. In that situation, the length restriction on non-special fields does not apply because columns are created to accommodate the number of characters, not bytes. If the database default character set does not allow Unicode storage, only the special fields allow storing Unicode values. Start MySQL Server. Under JDBC driver settings, locate the JDBC connection settings . Add this connection property: characterEncoding=UTF-8 9.6.4. Configuring Unicode support for a PostgreSQL database Unicode is supported for a PostgreSQL database when the database character set is UTF8. 
Unicode characters can be used in any field with no reduction of field length for non-special fields. The JDBC driver requires no special settings. The character set is determined when the PostgreSQL database is created. Check the default character set for a PostgreSQL cluster by entering the following SQL command. If the default character set is not UTF 8, create the database with the UTF8 as the default character set using a command such as: 9.7. Preparing for Amazon Aurora PostgreSQL When using Amazon Aurora PostgreSQL, the Amazon Web Services JDBC Driver offers additional features like transfer of database connections when a writer instance changes in a Multi-AZ setup. This driver is not part of the distribution and needs to be installed before it can be used. To install this driver, apply the following steps: When running the unzipped distribution: Download the JAR file from the Amazon Web Services JDBC Driver releases page and place it in Red Hat build of Keycloak's providers folder. When running containers: Build a custom Red Hat build of Keycloak image and add the JAR in the providers folder. A minimal Containerfile to build an image which can be used with the Red Hat build of Keycloak Operator looks like the following: FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chmod=0666 https://github.com/awslabs/aws-advanced-jdbc-wrapper/releases/download/2.3.1/aws-advanced-jdbc-wrapper-2.3.1.jar /opt/keycloak/providers/aws-advanced-jdbc-wrapper.jar See the Running Red Hat build of Keycloak in a container chapter for details on how to build optimized images, and the Using custom Red Hat build of Keycloak images chapter on how to run optimized and non-optimized images with the Red Hat build of Keycloak Operator. Configure Red Hat build of Keycloak to run with the following parameters: db-url Insert aws-wrapper to the regular PostgreSQL JDBC URL resulting in a URL like jdbc:aws-wrapper:postgresql://... . db-driver Set to software.amazon.jdbc.Driver to use the AWS JDBC wrapper. 9.8. Preparing for MySQL server Beginning with MySQL 8.0.30, MySQL supports generated invisible primary keys for any InnoDB table that is created without an explicit primary key (more information here ). If this feature is enabled, the database schema initialization and also migrations will fail with the error message Multiple primary key defined (1068) . You then need to disable it by setting the parameter sql_generate_invisible_primary_key to OFF in your MySQL server configuration before installing or upgrading Red Hat build of Keycloak. 9.9. Changing database locking timeout in a cluster configuration Because cluster nodes can boot concurrently, they take extra time for database actions. For example, a booting server instance may perform some database migration, importing, or first time initializations. A database lock prevents start actions from conflicting with each other when cluster nodes boot up concurrently. The maximum timeout for this lock is 900 seconds. If a node waits on this lock for more than the timeout, the boot fails. The need to change the default value is unlikely, but you can change it by entering this command: bin/kc.[sh|bat] start --spi-dblock-jpa-lock-wait-timeout 900 9.10. Using Database Vendors with XA transaction support Red Hat build of Keycloak uses non-XA transactions and the appropriate database drivers by default. 
If you wish to use the XA transaction support offered by your driver, enter the following command: bin/kc.[sh|bat] build --db=<vendor> --transaction-xa-enabled=true Red Hat build of Keycloak automatically chooses the appropriate JDBC driver for your vendor. Note Certain vendors, such as Azure SQL and MariaDB Galera, do not support or rely on the XA transaction mechanism. XA recovery defaults to enabled and will use the file system location KEYCLOAK_HOME/data/transaction-logs to store transaction logs. Note Enabling XA transactions in a containerized environment does not fully support XA recovery unless stable storage is available at that path. 9.11. Setting JPA provider configuration option for migrationStrategy To setup the JPA migrationStrategy (manual/update/validate) you should setup JPA provider as follows: Setting the migration-strategy for the quarkus provider of the connections-jpa SPI bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-strategy=manual If you want to get a SQL file for DB initialization, too, you have to add this additional SPI initializeEmpty (true/false): Setting the initialize-empty for the quarkus provider of the connections-jpa SPI bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-initialize-empty=false In the same way the migrationExport to point to a specific file and location: Setting the migration-export for the quarkus provider of the connections-jpa SPI bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-export=<path>/<file.sql> 9.12. Relevant options Value db 🛠 The database vendor. CLI: --db Env: KC_DB dev-file (default), dev-mem , mariadb , mssql , mysql , oracle , postgres db-driver 🛠 The fully qualified class name of the JDBC driver. If not set, a default driver is set accordingly to the chosen database. CLI: --db-driver Env: KC_DB_DRIVER db-password The password of the database user. CLI: --db-password Env: KC_DB_PASSWORD db-pool-initial-size The initial size of the connection pool. CLI: --db-pool-initial-size Env: KC_DB_POOL_INITIAL_SIZE db-pool-max-size The maximum size of the connection pool. CLI: --db-pool-max-size Env: KC_DB_POOL_MAX_SIZE 100 (default) db-pool-min-size The minimal size of the connection pool. CLI: --db-pool-min-size Env: KC_DB_POOL_MIN_SIZE db-schema The database schema to be used. CLI: --db-schema Env: KC_DB_SCHEMA db-url The full database JDBC URL. If not provided, a default URL is set based on the selected database vendor. For instance, if using postgres , the default JDBC URL would be jdbc:postgresql://localhost/keycloak . CLI: --db-url Env: KC_DB_URL db-url-database Sets the database name of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-database Env: KC_DB_URL_DATABASE db-url-host Sets the hostname of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-host Env: KC_DB_URL_HOST db-url-port Sets the port of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-port Env: KC_DB_URL_PORT db-url-properties Sets the properties of the default JDBC URL of the chosen vendor. Make sure to set the properties accordingly to the format expected by the database vendor, as well as appending the right character at the beginning of this property value. If the db-url option is set, this option is ignored. CLI: --db-url-properties Env: KC_DB_URL_PROPERTIES db-username The username of the database user. 
CLI: --db-username Env: KC_DB_USERNAME transaction-xa-enabled 🛠 If set to true, XA datasources will be used. CLI: --transaction-xa-enabled Env: KC_TRANSACTION_XA_ENABLED true , false (default)
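As a rough illustration of how the options above combine, the following start command is a minimal sketch only: the Aurora endpoint, database name, credentials, and pool sizes are placeholders rather than values from the product documentation, and it assumes the AWS JDBC wrapper JAR has already been added to the providers folder as described earlier.
bin/kc.sh start \
  --db postgres \
  --db-url jdbc:aws-wrapper:postgresql://my-aurora-cluster.cluster-example.eu-west-1.rds.amazonaws.com:5432/keycloak \
  --db-driver software.amazon.jdbc.Driver \
  --db-username keycloak \
  --db-password change_me \
  --db-pool-min-size 10 \
  --db-pool-max-size 50
The same values can instead be supplied through the corresponding KC_DB_* environment variables listed above.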
[ "FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc11/23.5.0.24.07/ojdbc11-23.5.0.24.07.jar /opt/keycloak/providers/ojdbc11.jar ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/nls/orai18n/23.5.0.24.07/orai18n-23.5.0.24.07.jar /opt/keycloak/providers/orai18n.jar Setting the build parameter for the database: ENV KC_DB=oracle Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build", "FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/12.8.1.jre11/mssql-jdbc-12.8.1.jre11.jar /opt/keycloak/providers/mssql-jdbc.jar Setting the build parameter for the database: ENV KC_DB=mssql Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build", "bin/kc.[sh|bat] start --db postgres --db-url-host mypostgres --db-username myuser --db-password change_me", "bin/kc.[sh|bat] start --db postgres --db-url jdbc:postgresql://mypostgres/mydatabase", "bin/kc.[sh|bat] start --db postgres --db-driver=my.Driver", "show server_encoding;", "create database keycloak with encoding 'UTF8';", "FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chmod=0666 https://github.com/awslabs/aws-advanced-jdbc-wrapper/releases/download/2.3.1/aws-advanced-jdbc-wrapper-2.3.1.jar /opt/keycloak/providers/aws-advanced-jdbc-wrapper.jar", "bin/kc.[sh|bat] start --spi-dblock-jpa-lock-wait-timeout 900", "bin/kc.[sh|bat] build --db=<vendor> --transaction-xa-enabled=true", "bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-strategy=manual", "bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-initialize-empty=false", "bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-export=<path>/<file.sql>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_configuration_guide/db-
26.8. Sample Parameter File and CMS Configuration File
26.8. Sample Parameter File and CMS Configuration File To change the parameter file, begin by extending the shipped generic.prm file. Example of generic.prm file: Example of redhat.conf file configuring a QETH network device (pointed to by CMSCONFFILE in generic.prm ):
[ "root=\"/dev/ram0\" ro ip=\"off\" ramdisk_size=\"40000\" cio_ignore=\"all,!0.0.0009\" CMSDASD=\"191\" CMSCONFFILE=\"redhat.conf\" vnc", "NETTYPE=\"qeth\" SUBCHANNELS=\"0.0.0600,0.0.0601,0.0.0602\" PORTNAME=\"FOOBAR\" PORTNO=\"0\" LAYER2=\"1\" MACADDR=\"02:00:be:3a:01:f3\" HOSTNAME=\"foobar.systemz.example.com\" IPADDR=\"192.168.17.115\" NETMASK=\"255.255.255.0\" GATEWAY=\"192.168.17.254\" DNS=\"192.168.17.1\" SEARCHDNS=\"systemz.example.com:example.com\" DASD=\"200-203\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-parmfiles-Sample_files
Chapter 4. Updatable Views
Chapter 4. Updatable Views 4.1. Updatable Views Any view may be marked as updatable. In many circumstances the view definition may allow the view to be inherently updatable without the need to manually define handling of INSERT/UPDATE/DELETE operations. An inherently updatable view cannot be defined with a query that has: A set operation (INTERSECT, EXCEPT, UNION). SELECT DISTINCT Aggregation (aggregate functions, GROUP BY, HAVING) A LIMIT clause A UNION ALL can define an inherently updatable view only if each of the UNION branches is itself inherently updatable. A view defined by a UNION ALL can support inherent INSERTs if it is a partitioned union and the INSERT specifies values that belong to a single partition. Refer to Partitioned Union. Any view column that is not mapped directly to a column is not updatable and cannot be targeted by an UPDATE set clause or be an INSERT column. If a view is defined by a join query or has a WITH clause it may still be inherently updatable. However in these situations there are further restrictions and the resulting query plan may execute multiple statements. For a non-simple query to be updatable, it is required: An INSERT/UPDATE can only modify a single key-preserved table. To allow DELETE operations there must be only a single key-preserved table. If the default handling is not available or you wish to have an alternative implementation of an INSERT/UPDATE/DELETE, then you may use update procedures (see Section 2.10.6, "Update Procedures" ) to define procedures to handle the respective operations.
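To make the rules above concrete, here is a small illustrative sketch in plain SQL; the table and view names are hypothetical and not taken from the product documentation. The first view maps each of its columns directly to a single table, so it can be inherently updatable; the second uses aggregation and GROUP BY, so it cannot be, and INSERT/UPDATE/DELETE against it would have to be handled by update procedures.
-- Inherently updatable: simple projection over one key-preserved table
CREATE VIEW CustomerView AS
  SELECT CustomerID, Name, City
  FROM Accounts.Customer;
-- Not inherently updatable: aggregation and GROUP BY
CREATE VIEW CustomersPerCity AS
  SELECT City, COUNT(*) AS CustomerCount
  FROM Accounts.Customer
  GROUP BY City;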
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/chap-updatable_views
Chapter 3. Installing the Migration Toolkit for Containers
Chapter 3. Installing the Migration Toolkit for Containers You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4. Note To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster . After you have installed MTC, you must configure an object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 3.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.9. MTC 1.8 only supports migrations from OpenShift Container Platform 4.10 and later. Table 3.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.9 OpenShift Container Platform 4.10 or later Stable MTC version MTC v.1.7. z MTC v.1.7. z MTC v.1.7. z MTC v.1.8. z Installation Legacy MTC v.1.7. z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7. z , if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 3.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. 
Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Log in to your OpenShift Container Platform source cluster. Verify that the cluster can authenticate with registry.redhat.io : USD oc run test --image registry.redhat.io/ubi9 --command sleep infinity Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 3.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.13 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.13 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 3.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.13, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 3.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. 
The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 3.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 3.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 3.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 3.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 3.4.2.1. NetworkPolicy configuration 3.4.2.1.1. 
Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 3.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 3.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 3.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 3.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 3.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 3.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... 
spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 3.4.4. Running Rsync as either root or non-root OpenShift Container Platform environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible. 3.4.4.1. Manually overriding default non-root operation for data transfer Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster for all migrations. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges before migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 3.4.4.2. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 3.4.4.3. 
Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 3.5. Configuring a replication repository You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. Select a method that is suited for your environment and is supported by your storage provider. MTC supports the following storage providers: Multicloud Object Gateway Amazon Web Services S3 Google Cloud Platform Microsoft Azure Blob Generic S3 object storage, for example, Minio or Ceph S3 3.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 3.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint, which you need to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC). You must retrieve the Multicloud Object Gateway (MCG) credentials, which you need to create a Secret custom resource (CR) for MTC. Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. You use these credentials to add MCG as a replication repository. 3.5.3. Configuring Amazon Web Services You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC) . Prerequisites You must have the AWS CLI installed. The AWS S3 storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: You must have access to EC2 Elastic Block Storage (EBS). The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. 
Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID . You use the credentials to add AWS as a replication repository. 3.5.4. Configuring Google Cloud Platform You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. The GCP storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to add GCP as a replication repository. 3.5.5. Configuring Microsoft Azure You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the Azure CLI installed. The Azure Blob storage container must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location. 
Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Create a service principal and credentials for velero : USD AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` Create a service principal with the Contributor role, assigning a specific --role and --scopes : USD AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \ --role "Contributor" \ --query 'password' -o tsv \ --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP` The CLI generates a password for you. Ensure you capture the password. After creating the service principal, obtain the client id. USD AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>` Note For this to be successful, you must know your Azure application ID. Save the service principal credentials in the credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF You use the credentials-velero file to add Azure as a replication repository. 3.5.6. Additional resources MTC workflow About data copy methods Adding a replication repository to the MTC web console 3.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
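As a small, non-authoritative addition, the following commands are one way to confirm that the cleanup steps above removed the cluster-scoped MTC and Velero resources; the grep patterns simply mirror the delete commands in this section, and empty output means nothing was left behind.
oc get crds -o name | grep -E 'migration.openshift.io|velero'
oc get clusterroles -o name | grep -E 'migration.openshift.io|migration-operator|velero'
oc get clusterrolebindings -o name | grep -E 'migration.openshift.io|migration-operator|velero'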
[ "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "oc run test --image registry.redhat.io/ubi9 --command sleep infinity", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] 
runAsUser: 10010001 runAsGroup: 3", "BUCKET=<your_bucket>", "REGION=<your_region>", "aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1", "aws iam create-user --user-name velero 1", "cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF", "aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json", "aws iam create-access-key --user-name velero", "{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }", "gcloud auth login", "BUCKET=<bucket> 1", "gsutil mb gs://USDBUCKET/", "PROJECT_ID=USD(gcloud config get-value project)", "gcloud iam service-accounts create velero --display-name \"Velero service account\"", "gcloud iam service-accounts list", "SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')", "ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )", "gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"", "gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server", "gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}", "gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL", "az login", "AZURE_RESOURCE_GROUP=Velero_Backups", "az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1", "AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"", "az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot", "BLOB_CONTAINER=velero", "az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID", "AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`", "AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`", "AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`", "cat << EOF > ./credentials-velero 
AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migration_toolkit_for_containers/installing-mtc
Chapter 13. Configuring seccomp profiles
Chapter 13. Configuring seccomp profiles An OpenShift Container Platform container or a pod runs a single application that performs one or more well-defined tasks. The application usually requires only a small subset of the underlying operating system kernel APIs. Secure computing mode, seccomp, is a Linux kernel feature that can be used to limit the process running in a container to only using a subset of the available system calls. The restricted-v2 SCC applies to all newly created pods in 4.16. The default seccomp profile runtime/default is applied to these pods. Seccomp profiles are stored as JSON files on the disk. Important Seccomp profiles cannot be applied to privileged containers. 13.1. Verifying the default seccomp profile applied to a pod OpenShift Container Platform ships with a default seccomp profile that is referenced as runtime/default . In 4.16, newly created pods have the Security Context Constraint (SCC) set to restricted-v2 and the default seccomp profile applies to the pod. Procedure You can verify the Security Context Constraint (SCC) and the default seccomp profile set on a pod by running the following commands: Verify what pods are running in the namespace: USD oc get pods -n <namespace> For example, to verify what pods are running in the workshop namespace run the following: USD oc get pods -n workshop Example output NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s Inspect the pods: USD oc get pod parksmap-1-4xkwf -n workshop -o yaml Example output apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.18" ], "default": true, "dns": {} }] k8s.v1.cni.cncf.io/network-status: |- [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.18" ], "default": true, "dns": {} }] openshift.io/deployment-config.latest-version: "1" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2 1 The restricted-v2 SCC is added by default if your workload does not have access to a different SCC. 2 Newly created pods in 4.16 will have the seccomp profile configured to runtime/default as mandated by the SCC. 13.1.1. Upgraded cluster In clusters upgraded to 4.16 all authenticated users have access to the restricted and restricted-v2 SCC. A workload admitted by the SCC restricted for example, on a OpenShift Container Platform v4.10 cluster when upgraded may get admitted by restricted-v2 . This is because restricted-v2 is the more restrictive SCC between restricted and restricted-v2 . Note The workload must be able to run with retricted-v2 . Conversely with a workload that requires privilegeEscalation: true this workload will continue to have the restricted SCC available for any authenticated user. This is because restricted-v2 does not allow privilegeEscalation . 13.1.2. Newly installed cluster For newly installed OpenShift Container Platform 4.11 or later clusters, the restricted-v2 replaces the restricted SCC as an SCC that is available to be used by any authenticated user. A workload with privilegeEscalation: true , is not admitted into the cluster since restricted-v2 is the only SCC available for authenticated users by default. The feature privilegeEscalation is allowed by restricted but not by restricted-v2 . 
More features are denied by restricted-v2 than were allowed by restricted SCC. A workload with privilegeEscalation: true may be admitted into a newly installed OpenShift Container Platform 4.11 or later cluster. To give access to the restricted SCC to the ServiceAccount running the workload (or any other SCC that can admit this workload) using a RoleBinding run the following command: USD oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name> In OpenShift Container Platform 4.16 the ability to add the pod annotations seccomp.security.alpha.kubernetes.io/pod: runtime/default and container.seccomp.security.alpha.kubernetes.io/<container_name>: runtime/default is deprecated. 13.2. Configuring a custom seccomp profile You can configure a custom seccomp profile, which allows you to update the filters based on the application requirements. This allows cluster administrators to have greater control over the security of workloads running in OpenShift Container Platform. Seccomp security profiles list the system calls (syscalls) a process can make. Permissions are broader than SELinux, which restrict operations, such as write , system-wide. 13.2.1. Creating seccomp profiles You can use the MachineConfig object to create profiles. Seccomp can restrict system calls (syscalls) within a container, limiting the access of your application. Prerequisites You have cluster admin permissions. You have created a custom security context constraints (SCC). For more information, see Additional resources . Procedure Create the MachineConfig object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json 13.2.2. Setting up the custom seccomp profile Prerequisite You have cluster administrator permissions. You have created a custom security context constraints (SCC). For more information, see "Additional resources". You have created a custom seccomp profile. Procedure Upload your custom seccomp profile to /var/lib/kubelet/seccomp/<custom-name>.json by using the Machine Config. See "Additional resources" for detailed steps. Update the custom SCC by providing reference to the created custom seccomp profile: seccompProfiles: - localhost/<custom-name>.json 1 1 Provide the name of your custom seccomp profile. 13.2.3. Applying the custom seccomp profile to the workload Prerequisite The cluster administrator has set up the custom seccomp profile. For more details, see "Setting up the custom seccomp profile". Procedure Apply the seccomp profile to the workload by setting the securityContext.seccompProfile.type field as following: Example spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1 1 Provide the name of your custom seccomp profile. Alternatively, you can use the pod annotations seccomp.security.alpha.kubernetes.io/pod: localhost/<custom-name>.json . However, this method is deprecated in OpenShift Container Platform 4.16. During deployment, the admission controller validates the following: The annotations against the current SCCs allowed by the user role. The SCC, which includes the seccomp profile, is allowed for the pod. If the SCC is allowed for the pod, the kubelet runs the pod with the specified seccomp profile. 
Important Ensure that the seccomp profile is deployed to all worker nodes. Note The custom SCC must have the appropriate priority to be automatically assigned to the pod or meet other conditions required by the pod, such as allowing CAP_NET_ADMIN. 13.3. Additional resources Managing security context constraints Machine Config Overview
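One detail the MachineConfig example above leaves implicit is how the base64 <hash> value is produced. The following is a minimal sketch, not taken from the product documentation: the profile name and the blocked syscall are purely illustrative, and a real profile should be derived from the actual needs of the workload.
cat << EOF > seccomp-nostat.json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    { "names": [ "statx" ], "action": "SCMP_ACT_ERRNO" }
  ]
}
EOF
# Encode the profile; paste the output after "base64," in the MachineConfig source field shown above.
base64 -w0 seccomp-nostat.json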
[ "oc get pods -n <namespace>", "oc get pods -n workshop", "NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s", "oc get pod parksmap-1-4xkwf -n workshop -o yaml", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] openshift.io/deployment-config.latest-version: \"1\" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2", "oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json", "seccompProfiles: - localhost/<custom-name>.json 1", "spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_and_compliance/seccomp-profiles
Chapter 19. Creating cluster resources that are active on multiple nodes (cloned resources)
Chapter 19. Creating cluster resources that are active on multiple nodes (cloned resources) You can clone a cluster resource so that the resource can be active on multiple nodes. For example, you can use cloned resources to configure multiple instances of an IP resource to distribute throughout a cluster for node balancing. You can clone any resource provided the resource agent supports it. A clone consists of one resource or one resource group. Note Only resources that can be active on multiple nodes at the same time are suitable for cloning. For example, a Filesystem resource mounting a non-clustered file system such as ext4 from a shared memory device should not be cloned. Since the ext4 partition is not cluster aware, this file system is not suitable for read/write operations occurring from multiple nodes at the same time. 19.1. Creating and removing a cloned resource You can create a resource and a clone of that resource at the same time. To create a resource and clone of the resource with the following single command. You can set a custom name for the clone by specifying a value for the clone_id option. You cannot create a resource group and a clone of that resource group in a single command. Alternately, you can create a clone of a previously-created resource or resource group with the following command. By default, the name of the clone will be resource_id -clone or group_name -clone . You can set a custom name for the clone by specifying a value for the clone_id option. Note You need to configure resource configuration changes on one node only. Note When configuring constraints, always use the name of the group or clone. When you create a clone of a resource, by default the clone takes on the name of the resource with -clone appended to the name. The following command creates a resource of type apache named webfarm and a clone of that resource named webfarm-clone . Note When you create a resource or resource group clone that will be ordered after another clone, you should almost always set the interleave=true option. This ensures that copies of the dependent clone can stop or start when the clone it depends on has stopped or started on the same node. If you do not set this option, if a cloned resource B depends on a cloned resource A and a node leaves the cluster, when the node returns to the cluster and resource A starts on that node, then all of the copies of resource B on all of the nodes will restart. This is because when a dependent cloned resource does not have the interleave option set, all instances of that resource depend on any running instance of the resource it depends on. Use the following command to remove a clone of a resource or a resource group. This does not remove the resource or resource group itself. The following table describes the options you can specify for a cloned resource. Table 19.1. Resource Clone Options Field Description priority, target-role, is-managed Options inherited from resource that is being cloned, as described in the "Resource Meta Options" table in Configuring resource meta options . clone-max How many copies of the resource to start. Defaults to the number of nodes in the cluster. clone-node-max How many copies of the resource can be started on a single node; the default value is 1 . notify When stopping or starting a copy of the clone, tell all the other copies beforehand and when the action was successful. Allowed values: false , true . The default value is false . 
globally-unique Does each copy of the clone perform a different function? Allowed values: false , true If the value of this option is false , these resources behave identically everywhere they are running and thus there can be only one copy of the clone active per machine. If the value of this option is true , a copy of the clone running on one machine is not equivalent to another instance, whether that instance is running on another node or on the same node. The default value is true if the value of clone-node-max is greater than one; otherwise the default value is false . ordered Should the copies be started in series (instead of in parallel). Allowed values: false , true . The default value is false . interleave Changes the behavior of ordering constraints (between clones) so that copies of the first clone can start or stop as soon as the copy on the same node of the second clone has started or stopped (rather than waiting until every instance of the second clone has started or stopped). Allowed values: false , true . The default value is false . clone-min If a value is specified, any clones which are ordered after this clone will not be able to start until the specified number of instances of the original clone are running, even if the interleave option is set to true . To achieve a stable allocation pattern, clones are slightly sticky by default, which indicates that they have a slight preference for staying on the node where they are running. If no value for resource-stickiness is provided, the clone will use a value of 1. Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving copies around the cluster. For information about setting the resource-stickiness resource meta-option, see Configuring resource meta options . 19.2. Configuring clone resource constraints In most cases, a clone will have a single copy on each active cluster node. You can, however, set clone-max for the resource clone to a value that is less than the total number of nodes in the cluster. If this is the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently to those for regular resources except that the clone's id must be used. The following command creates a location constraint for the cluster to preferentially assign resource clone webfarm-clone to node1 . Ordering constraints behave slightly differently for clones. In the example below, because the interleave clone option is left to default as false , no instance of webfarm-stats will start until all instances of webfarm-clone that need to be started have done so. Only if no copies of webfarm-clone can be started then webfarm-stats will be prevented from being active. Additionally, webfarm-clone will wait for webfarm-stats to be stopped before stopping itself. Colocation of a regular (or group) resource with a clone means that the resource can run on any machine with an active copy of the clone. The cluster will choose a copy based on where the clone is running and the resource's own location preferences. Colocation between clones is also possible. In such cases, the set of allowed locations for the clone is limited to nodes on which the clone is (or will be) active. Allocation is then performed as normally. 
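The pcs command listings referenced in the two sections above (and in the colocation example that follows) are not reproduced in this extract. As a rough sketch of the syntax they describe, using the webfarm resources from the examples above and an illustrative node name:
# Create a resource and a clone of it in a single command (an optional clone ID and clone options may follow):
pcs resource create webfarm apache clone
# Clone a previously created resource or resource group:
pcs resource clone webfarm
# Remove the clone without removing the resource or resource group itself:
pcs resource unclone webfarm
# Location constraint: preferentially assign webfarm-clone to node1
pcs constraint location webfarm-clone prefers node1
# Ordering constraint: start webfarm-clone before webfarm-stats
pcs constraint order start webfarm-clone then webfarm-stats
# Colocation constraint: run webfarm-stats with an active copy of webfarm-clone
pcs constraint colocation add webfarm-stats with webfarm-clone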
The following command creates a colocation constraint to ensure that the resource webfarm-stats runs on the same node as an active copy of webfarm-clone . 19.3. Promotable clone resources Promotable clone resources are clone resources with the promotable meta attribute set to true . They allow the instances to be in one of two operating modes; these are called promoted and unpromoted . The names of the modes do not have specific meanings, except for the limitation that when an instance is started, it must come up in the Unpromoted state. Note: The Promoted and Unpromoted role names are the functional equivalent of the Master and Slave Pacemaker roles in previous RHEL releases. 19.3.1. Creating a promotable clone resource You can create a resource as a promotable clone with the following single command. By default, the name of the promotable clone will be resource_id -clone . You can set a custom name for the clone by specifying a value for the clone_id option. Alternatively, you can create a promotable resource from a previously-created resource or resource group with the following command. By default, the name of the promotable clone will be resource_id -clone or group_name -clone . You can set a custom name for the clone by specifying a value for the clone_id option. The following table describes the extra clone options you can specify for a promotable resource. Table 19.2. Extra Clone Options Available for Promotable Clones Field Description promoted-max How many copies of the resource can be promoted; default 1. promoted-node-max How many copies of the resource can be promoted on a single node; default 1. 19.3.2. Configuring promotable resource constraints In most cases, a promotable resource will have a single copy on each active cluster node. If this is not the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently than those for regular resources. You can create a colocation constraint which specifies whether the resources are operating in a promoted or unpromoted role. The following command creates a resource colocation constraint. For information about colocation constraints, see Colocating cluster resources . When configuring an ordering constraint that includes promotable resources, one of the actions that you can specify for the resources is promote , indicating that the resource be promoted from the unpromoted role to the promoted role. Additionally, you can specify an action of demote , indicating that the resource be demoted from the promoted role to the unpromoted role. The command for configuring an order constraint is as follows. For information about resource order constraints, see Determining the order in which cluster resources are run . 19.4. Demoting a promoted resource on failure You can configure a promotable resource so that when a promote or monitor action fails for that resource, or the partition in which the resource is running loses quorum, the resource will be demoted but will not be fully stopped. This can prevent the need for manual intervention in situations where fully stopping the resource would require it. To configure a promotable resource to be demoted when a promote action fails, set the on-fail operation meta option to demote , as in the following example. 
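A minimal sketch of this operation, using my-rsc as a placeholder resource name (this mirrors the command listed at the end of this chapter):
pcs resource op add my-rsc promote on-fail="demote"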
To configure a promotable resource to be demoted when a monitor action fails, set interval to a nonzero value, set the on-fail operation meta option to demote , and set role to Promoted , as in the following example. To configure a cluster so that when a cluster partition loses quorum any promoted resources will be demoted but left running and all other resources will be stopped, set the no-quorum-policy cluster property to demote . Setting the on-fail meta-attribute to demote for an operation does not affect how promotion of a resource is determined. If the affected node still has the highest promotion score, it will be selected to be promoted again.
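The following sketch shows the monitor operation and the cluster property together; my-rsc is a placeholder resource name, and the pcs property set syntax for setting cluster properties is assumed to be available in your release:
pcs resource op add my-rsc monitor interval="10s" on-fail="demote" role="Promoted"
pcs property set no-quorum-policy=demote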
[ "pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] [meta resource meta options ] clone [ clone_id ] [ clone options ]", "pcs resource clone resource_id | group_id [ clone_id ][ clone options ]", "pcs resource create webfarm apache clone", "pcs resource unclone resource_id | clone_id | group_name", "pcs constraint location webfarm-clone prefers node1", "pcs constraint order start webfarm-clone then webfarm-stats", "pcs constraint colocation add webfarm-stats with webfarm-clone", "pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] promotable [ clone_id ] [ clone options ]", "pcs resource promotable resource_id [ clone_id ] [ clone options ]", "pcs constraint colocation add [promoted|unpromoted] source_resource with [promoted|unpromoted] target_resource [ score ] [ options ]", "pcs constraint order [ action ] resource_id then [ action ] resource_id [ options ]", "pcs resource op add my-rsc promote on-fail=\"demote\"", "pcs resource op add my-rsc monitor interval=\"10s\" on-fail=\"demote\" role=\"Promoted\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_creating-multinode-resources-configuring-and-managing-high-availability-clusters
Chapter 123. Servlet
Chapter 123. Servlet Only consumer is supported The Servlet component provides HTTP-based endpoints for consuming HTTP requests that arrive at an HTTP endpoint that is bound to a published Servlet. Note The Servlet component is stream based, which means the input it receives is submitted to Camel as a stream. That means you will only be able to read the content of the stream once . If you find a situation where the message body appears to be empty or you need to access the data multiple times (for example, when doing multicasting or redelivery error handling), you should use Stream caching or convert the message body to a String , which is safe to be read multiple times. 123.1. Dependencies When using servlet with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-servlet-starter</artifactId> </dependency> 123.2. URI format 123.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 123.3.1. Configuring Component Options At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often need to configure only a few options on a component, or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 123.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded URLs, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 123.4. Component Options The Servlet component supports 11 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean muteException (consumer) If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. false boolean servletName (consumer) Default name of servlet to use. 
The default name is CamelServlet. CamelServlet String attachmentMultipartBinding (consumer (advanced)) Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default as this may require servlet specific configuration to enable this when using Servlets. false boolean fileNameExtWhitelist (consumer (advanced)) Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String httpRegistry (consumer (advanced)) To use a custom org.apache.camel.component.servlet.HttpRegistry. HttpRegistry allowJavaSerializedObject (advanced) Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean httpBinding (advanced) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding httpConfiguration (advanced) To use the shared HttpConfiguration as base configuration. HttpConfiguration headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy 123.5. Endpoint Options The Servlet endpoint is configured using URI syntax: with the following path and query parameters: 123.5.1. Path Parameters (1 parameters) Name Description Default Type contextPath (consumer) Required The context-path to use. String 123.5.2. Query Parameters (22 parameters) Name Description Default Type chunked (consumer) If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response. true boolean disableStreamCache (common) Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. If this option is set to true, then the producers will not cache the response body stream but use the response stream as-is as the message body. false boolean headerFilterStrategy (common) To use a custom HeaderFilterStrategy to filter header to and from Camel message. 
HeaderFilterStrategy httpBinding (common (advanced)) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding async (consumer) Configure the consumer to work in async mode. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean httpMethodRestrict (consumer) Used to only allow consuming if the HttpMethod matches, such as GET/POST/PUT etc. Multiple methods can be specified separated by comma. String matchOnUriPrefix (consumer) Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean muteException (consumer) If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. false boolean responseBufferSize (consumer) To use a custom buffer size on the javax.servlet.ServletResponse. Integer servletName (consumer) Name of the servlet to use. CamelServlet String transferException (consumer) If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was sent back serialized in the response as an application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean attachmentMultipartBinding (consumer (advanced)) Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default as this may require servlet specific configuration to enable this when using Servlets. false boolean eagerCheckContentAvailable (consumer (advanced)) Whether to eager check whether the HTTP requests has content if the content-length header is 0 or not present. This can be turned on in case HTTP clients do not send streamed data. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern fileNameExtWhitelist (consumer (advanced)) Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String mapHttpMessageBody (consumer (advanced)) If this option is true then IN exchange Body of the exchange will be mapped to HTTP body. Setting this to false will avoid the HTTP mapping. 
true boolean mapHttpMessageFormUrlEncodedBody (consumer (advanced)) If this option is true then IN exchange Form Encoded body of the exchange will be mapped to HTTP. Setting this to false will avoid the HTTP Form Encoded body mapping. true boolean mapHttpMessageHeaders (consumer (advanced)) If this option is true then IN exchange Headers of the exchange will be mapped to HTTP headers. Setting this to false will avoid the HTTP Headers mapping. true boolean optionsEnabled (consumer (advanced)) Specifies whether to enable HTTP OPTIONS for this Servlet consumer. By default OPTIONS is turned off. false boolean traceEnabled (consumer (advanced)) Specifies whether to enable HTTP TRACE for this Servlet consumer. By default TRACE is turned off. false boolean 123.6. Message Headers Camel will apply the same Message Headers as the HTTP component. Camel will also populate all request.parameter and request.headers . For example, if a client request has the URL, http://myserver/myserver?orderid=123 , the exchange will contain a header named orderid with the value 123. 123.7. Usage You can consume only from endpoints generated by the Servlet component. Therefore, it should be used only as input into your Camel routes. To issue HTTP requests against other HTTP endpoints, use the HTTP component. 123.8. Spring Boot Auto-Configuration The component supports 15 options, which are listed below. Name Description Default Type camel.component.servlet.allow-java-serialized-object Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false Boolean camel.component.servlet.attachment-multipart-binding Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default as this may require servlet specific configuration to enable this when using Servlet's. false Boolean camel.component.servlet.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.servlet.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.servlet.enabled Whether to enable auto configuration of the servlet component. This is enabled by default. Boolean camel.component.servlet.file-name-ext-whitelist Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String camel.component.servlet.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. 
The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.servlet.http-binding To use a custom HttpBinding to control the mapping between Camel message and HttpClient. The option is a org.apache.camel.http.common.HttpBinding type. HttpBinding camel.component.servlet.http-configuration To use the shared HttpConfiguration as base configuration. The option is a org.apache.camel.http.common.HttpConfiguration type. HttpConfiguration camel.component.servlet.http-registry To use a custom org.apache.camel.component.servlet.HttpRegistry. The option is a org.apache.camel.http.common.HttpRegistry type. HttpRegistry camel.component.servlet.mute-exception If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. false Boolean camel.component.servlet.servlet-name Default name of servlet to use. The default name is CamelServlet. CamelServlet String camel.servlet.mapping.context-path Context path used by the servlet component for automatic mapping. /camel/* String camel.servlet.mapping.enabled Enables the automatic mapping of the servlet component into the Spring web context. true Boolean camel.servlet.mapping.servlet-name The name of the Camel servlet. CamelServlet String
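To illustrate the header mapping described in the Message Headers section, the following sketch sends a request to a servlet consumer endpoint. It assumes the default /camel/* mapping context path, a route consuming from a hello context path, and a Spring Boot application listening on port 8080; the host, port, and route name are placeholders, not part of this reference:
curl "http://localhost:8080/camel/hello?orderid=123"
The consuming route would then see a message header named orderid with the value 123, as described above.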
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-servlet-starter</artifactId> </dependency>", "servlet://relative_path[?options]", "servlet:contextPath" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-servlet-component-starter
Chapter 2. Dynamically provisioned OpenShift Data Foundation deployed on VMware
Chapter 2. Dynamically provisioned OpenShift Data Foundation deployed on VMware 2.1. Replacing operational or failed storage devices on VMware infrastructure Create a new Persistent Volume Claim (PVC) on a new volume, and remove the old object storage device (OSD) when one or more virtual machine disks (VMDK) needs to be replaced in OpenShift Data Foundation which is deployed dynamically on VMware infrastructure. Prerequisites Ensure that the data is resilient. In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container platform node on which the OSD is scheduled. Note The status of the pod is Running , if the OSD you want to replace is healthy. Scale down the OSD deployment for the OSD to be replaced. Each time you want to replace the OSD, update the osd_id_to_remove parameter with the OSD ID, and repeat this step. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Important If the rook-ceph-osd pod is in terminating state, use the force option to delete the pod. Example output: Remove the old OSD from the cluster so that you can add a new OSD. Delete any old ocs-osd-removal jobs. Example output: Note If the above job does not reach Completed state after 10 minutes, then the job must be deleted and rerun with FORCE_OSD_REMOVAL=true . Navigate to the openshift-storage project. Remove the old OSD from the cluster. The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job pod fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get the PVC name(s) of the replaced OSD(s) from the logs of ocs-osd-removal-job pod. Example output: For each of the previously identified nodes, do the following: Create a debug pod and chroot to the host on the storage node. <node name> Is the name of the node. Find a relevant device name based on the PVC names identified in the step. <pvc name> Is the name of the PVC. Example output: Remove the mapped device. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process which was stuck. Terminate the process using the kill command. 
<PID> Is the process ID. Verify that the device name is removed. Delete the ocs-osd-removal job. Example output: Note When using an external key management system (KMS) with data encryption, the old OSD encryption key can be removed from the Vault server as it is now an orphan key. Verification steps Verify that there is a new OSD running. Example output: Verify that there is a new PVC created which is in Bound state. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset name(s). Log in to the OpenShift Web Console and view the storage dashboard.
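As a condensed sketch, the verification can be driven from the command line with the commands shown at the end of this chapter; pod, PVC, and node names in the output will differ in your cluster, and <node name> is a placeholder:
oc get -n openshift-storage pods -l app=rook-ceph-osd
oc get -n openshift-storage pvc
oc debug node/<node name>
chroot /host
lsblk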
[ "oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide", "rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>", "osd_id_to_remove=0", "oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0", "deployment.extensions/rook-ceph-osd-0 scaled", "oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}", "No resources found.", "oc delete pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --force --grace-period=0", "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'", "2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"", "oc debug node/ <node name>", "chroot /host", "dmsetup ls| grep <pvc name>", "ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)", "cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get -n openshift-storage pods -l app=rook-ceph-osd", "rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h", "oc get -n openshift-storage pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ocs-deviceset-0-0-2s6w4 Bound pvc-7c9bcaf7-de68-40e1-95f9-0b0d7c0ae2fc 512Gi RWO thin 5m ocs-deviceset-1-0-q8fwh Bound pvc-9e7e00cb-6b33-402e-9dc5-b8df4fd9010f 512Gi RWO thin 1d20h ocs-deviceset-2-0-9v8lq Bound pvc-38cdfcee-ea7e-42a5-a6e1-aaa6d4924291 512Gi RWO thin 1d20h", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node name>", "chroot /host", "lsblk" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_vmware
Chapter 3. Installing a cluster on VMC with customizations
Chapter 3. Installing a cluster on VMC with customizations In OpenShift Container Platform version 4.12, you can install a cluster on your VMware vSphere instance using installer-provisioned infrastructure by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. To customize the OpenShift Container Platform installation, you modify parameters in the install-config.yaml file before you install the cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 3.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address. A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. 
The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 3.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware Cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single-tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 3.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned block registry storage . For more information on persistent storage, see Understanding persistent storage . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.4. 
VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 3.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Important Installing a cluster on VMware vSphere versions 7.0 and 7.0 Update 1 is deprecated. These versions are still fully supported, but all vSphere 6.x versions are no longer supported. Version 4.12 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. To update the hardware version for your vSphere virtual machines, see the "Updating hardware on nodes running in vSphere" article in the Updating clusters section. Table 3.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 or later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 3.5. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 3.3. Ports used for all-machine to all-machine communications Protocol Port Description VRRP N/A Required for keepalived ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 3.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 3.5. 
Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 3.6. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 3.7. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 3.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 3.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 3.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. 
For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. 
A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 3.6. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable both by clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable both by clients external to the cluster and from all the nodes within the cluster. 3.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Important If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.12 release notes document. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 3.10. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 
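Optionally, you can confirm that the installation host now trusts the vCenter certificate before you run the installation program. The following check is a minimal sketch, and vcenter.example.com is a placeholder for the fully qualified hostname of your vCenter instance: USD curl -sSf https://vcenter.example.com > /dev/null && echo "vCenter certificate is trusted" If curl reports a certificate verification error, confirm that you copied the root CA files into the system trust and that you updated the trust store. 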
3.11. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to their respective datacenters and clusters. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Table 3.7. Example of a configuration with multiple vSphere datacenters that run in a single VMware vCenter Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b 3.12. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the data center in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 3.12.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 3.12.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 3.8. 
Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 3.12.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 3.9. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . 
networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 3.12.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 3.10. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. 
The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . 
Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 3.12.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 3.11. Additional VMware vSphere cluster parameters Parameter Description Values vCenter The fully-qualified hostname or IP address of the vCenter server. String username The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String password The password for the vCenter user name. String datacenter The name of the data center to use in the vCenter instance. String defaultDatastore The name of the default datastore to use for provisioning volumes. String folder Optional. The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . resourcePool Optional. The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String apiVIPs The virtual IP (VIP) address that you configured for control plane API access. Note In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a List format to enter a value in the apiVIPs configuration setting. An IP address, for example 128.0.0.1 . ingressVIPs The virtual IP (VIP) address that you configured for cluster ingress. Note In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a List format to enter a value in the ingressVIPs configuration setting. An IP address, for example 128.0.0.1 . diskType Optional. The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . 3.12.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 3.12. 
Optional VMware vSphere machine pool parameters Parameter Description Values clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . osDisk.diskSizeGB The size of the disk in gigabytes. Integer cpus The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of the platform.vsphere.coresPerSocket value. Integer coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer memoryMB The size of a virtual machine's memory in megabytes. Integer 3.12.1.6. Region and zone enablement configuration parameters To use the region and zone enablement feature, you must specify region and zone enablement parameters in your installation file. Important Before you modify the install-config.yaml file to configure a region and zone enablement environment, read the "VMware vSphere region and zone enablement" and the "Configuring regions and zones for a VMware vCenter" sections. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 3.13. Region and zone enablement parameters Parameter Description Values failureDomains Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. String failureDomains.name The name of the failure domain. The machine pools use this name to reference the failure domain. String failureDomains.server Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the server role to the vSphere vCenter server location. String failureDomains.region You define a region by using a tag from the openshift-region tag category. The tag must be attached to the vCenter datacenter. String failureDomains.zone You define a zone by using a tag from the openshift-zone tag category. The tag must be attached to the vCenter cluster. String failureDomains.topology.computeCluster This parameter defines the compute cluster associated with the failure domain. If you do not define this parameter in your configuration, the compute cluster takes the value of platform.vsphere.cluster and platform.vsphere.datacenter . String failureDomains.topology.folder The absolute path of an existing folder where the installation program creates the virtual machines. If you do not define this parameter in your configuration, the folder takes the value of platform.vsphere.folder . String failureDomains.topology.datacenter Defines the datacenter where OpenShift Container Platform virtual machines (VMs) operate. If you do not define this parameter in your configuration, the datacenter defaults to platform.vsphere.datacenter . String failureDomains.topology.datastore Specifies the path to a vSphere datastore that stores virtual machine files for a failure domain. You must apply the datastore role to the vSphere vCenter datastore location. 
String failureDomains.topology.networks Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. If you do not define this parameter in your configuration, the network takes the value of platform.vsphere.network . String failureDomains.topology.resourcePool Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String 3.12.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIPs: - api_vip ingressVIPs: - ingress_vip fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 5 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 6 The cluster name that you specified in your DNS records. 7 Optional: Provide an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 8 The vSphere disk provisioning method. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 3.12.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.12.4. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file to deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Important VMware vSphere region and zone enablement is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important The example uses the govc command. The govc command is an open source command available from VMware. The govc command is not available from Red Hat. Red Hat Support does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Note You cannot change a failure domain after you installed an OpenShift Container Platform cluster on the VMware vSphere platform. You can add additional failure domains after cluster installation. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter cluster object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. 
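Before you create the installation configuration file, you can optionally confirm that the tag categories and tags exist in vCenter. The following commands are a minimal sketch that uses the same open source govc tool and the category and tag names created in the previous steps: USD govc tags.category.ls USD govc tags.ls The output must include the openshift-region and openshift-zone categories, as well as the region and zone tags that you attached to your datacenters and clusters. 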
Sample install-config.yaml file with multiple datacenters defined in a vSphere vCenter apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" controlPlane: name: master replicas: 3 vsphere: zones: 3 - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 9 cluster: cluster 10 resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: "/<datacenter1>/host/<cluster1>" 18 resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" 19 networks: 20 - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" # ... 1 You must set TechPreviewNoUpgrade as the value for this parameter, so that you can use the VMware vSphere region and zone enablement feature. 2 3 An optional parameter for specifying a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. If you do not define this parameter, nodes will be distributed among all defined failure domains. 4 5 6 7 8 9 10 11 The default vCenter topology. The installation program uses this topology information to deploy the bootstrap node. Additionally, the topology defines the default datastore for vSphere persistent volumes. 12 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. If you do not define this parameter, the installation program uses the default vCenter topology. 13 Defines the name of the failure domain. Each failure domain is referenced in the zones parameter to scope a machine pool to the failure domain. 14 You define a region by using a tag from the openshift-region tag category. The tag must be attached to the vCenter datacenter. 15 You define a zone by using a tag from the openshift-zone tag category. The tag must be attached to the vCenter cluster. 16 Specifies the vCenter resources associated with the failure domain. 17 An optional parameter for defining the vSphere datacenter that is associated with a failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 18 An optional parameter for stating the absolute file path for the compute cluster that is associated with the failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 19 An optional parameter for the installer-provisioned infrastructure. 
The parameter sets the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /example_datacenter/host/example_cluster/Resources . 20 An optional parameter that lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. If you do not define this parameter, the installation program uses the default vCenter topology. 21 An optional parameter for specifying a datastore to use for provisioning volumes. If you do not define this parameter, the installation program uses the default vCenter topology. 3.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. When you have configured your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host that is co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Important Use the openshift-install command from the bastion hosted in the VMC environment. Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.14. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. 
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.16. Creating registry storage After you install the cluster, you must create storage for the Registry Operator. 3.16.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.16.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.16.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. 
Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 3.16.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 3.17. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. 
See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 3.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 3.19. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 3.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 3.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 3.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the Ingress Controller's load balancer and the API load balancer. Check the vendor's documentation for this capability. 
For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 3.19.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, which runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address and ports 80 and 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and ports 80 and 443 are reachable from all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
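Before you point the load balancer at these health checks, you can probe the endpoints directly from the load balancer host. This is a minimal spot check, assuming that the placeholder node IP addresses are reachable from that host and that the standard health check paths shown in the examples that follow are in use:
$ curl -k -i https://<control_plane_node_ip>:6443/readyz
$ curl -k -i https://<control_plane_node_ip>:22623/healthz
$ curl -i http://<compute_node_ip>:1936/healthz/ready
A healthy backend answers each request with an HTTP 200 status.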
The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... 
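After you add or change these listen sections, it can help to validate the configuration before applying it. The following is a sketch only, assuming that HAProxy runs as a systemd service on the load balancer host and that its configuration file is at /etc/haproxy/haproxy.cfg; adjust the path and service name for your environment:
$ haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo systemctl reload haproxy
$ sudo systemctl status haproxy
The -c flag runs HAProxy in check mode, so syntax errors are reported without affecting the running service, and reloading instead of restarting avoids dropping established connections.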
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the Kubernetes API server resource is accessible through the load balancer by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the Machine Config server resource is accessible through the load balancer by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the Ingress Controller resource is accessible on port 80 by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the Ingress Controller resource is accessible on port 443 by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update the records on your DNS server so that the cluster API and applications resolve to the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
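Before you run the verification commands in the next step, you can confirm that the new records have propagated by querying them directly. This is a sketch that assumes the dig utility is available and uses the record names configured above; both queries should return the <load_balancer_ip_address> value (if you created a wildcard record for applications, you can also query a name under it, such as console-openshift-console.apps.<cluster_name>.<base_domain>):
$ dig +short api.<cluster_name>.<base_domain>
$ dig +short apps.<cluster_name>.<base_domain>
If a query returns no output or a stale address, wait for propagation or check the zone configuration on your DNS server.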
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 3.20. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIPs: - api_vip ingressVIPs: - ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "govc tags.category.create -d \"OpenShift region\" openshift-region", "govc tags.category.create -d \"OpenShift zone\" openshift-zone", "govc tags.create -c <region_tag_category> <region_tag>", "govc tags.create -c <zone_tag_category> <zone_tag>", "govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>", "govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1", "apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: 
\"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 
192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: 
csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_vmc/installing-vmc-customizations
Chapter 15. OpenShift SDN default CNI network provider
Chapter 15. OpenShift SDN default CNI network provider 15.1. About the OpenShift SDN default CNI network provider OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. This pod network is established and maintained by the OpenShift SDN, which configures an overlay network using Open vSwitch (OVS). 15.1.1. OpenShift SDN network isolation modes OpenShift SDN provides three SDN modes for configuring the pod network: Network policy mode allows project administrators to configure their own isolation policies using NetworkPolicy objects. Network policy is the default mode in OpenShift Container Platform 4.9. Multitenant mode provides project-level isolation for pods and services. Pods from different projects cannot send packets to or receive packets from pods and services of a different project. You can disable isolation for a project, allowing it to send network traffic to all pods and services in the entire cluster and receive network traffic from those pods and services. Subnet mode provides a flat pod network where every pod can communicate with every other pod and service. The network policy mode provides the same functionality as subnet mode. 15.1.2. Supported default CNI network provider feature matrix OpenShift Container Platform offers two supported choices, OpenShift SDN and OVN-Kubernetes, for the default Container Network Interface (CNI) network provider. The following table summarizes the current feature support for both network providers:
Table 15.1. Default CNI network provider feature comparison
Feature | OpenShift SDN | OVN-Kubernetes
Egress IPs | Supported | Supported
Egress firewall [1] | Supported | Supported
Egress router | Supported | Supported [2]
IPsec encryption | Not supported | Supported
IPv6 | Not supported | Supported [3]
Kubernetes network policy | Partially supported [4] | Supported
Kubernetes network policy logs | Not supported | Supported
Multicast | Supported | Supported
[1] Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress.
[2] Egress router for OVN-Kubernetes supports only redirect mode.
[3] IPv6 is supported only on bare metal clusters.
[4] Network policy for OpenShift SDN does not support egress rules and some ipBlock rules.
15.2. Configuring egress IPs for a project As a cluster administrator, you can configure the OpenShift SDN default Container Network Interface (CNI) network provider to assign one or more egress IP addresses to a project. 15.2.1. Egress IP address assignment for project egress traffic By configuring an egress IP address for a project, all outgoing external connections from the specified project will share the same, fixed source IP address. External resources can recognize traffic from a particular project based on the egress IP address. An egress IP address assigned to a project is different from the egress router, which is used to send traffic to specific destinations. Egress IP addresses are implemented as additional IP addresses on the primary network interface of the node and must be in the same subnet as the node's primary IP address. Important Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0 . Egress IPs on Amazon Web Services (AWS), Google Cloud Platform (GCP), and Azure are supported only on OpenShift Container Platform version 4.10 and later.
Allowing additional IP addresses on the primary network interface might require extra configuration when using some cloud or virtual machines solutions. You can assign egress IP addresses to namespaces by setting the egressIPs parameter of the NetNamespace object. After an egress IP is associated with a project, OpenShift SDN allows you to assign egress IPs to hosts in two ways: In the automatically assigned approach, an egress IP address range is assigned to a node. In the manually assigned approach, a list of one or more egress IP address is assigned to a node. Namespaces that request an egress IP address are matched with nodes that can host those egress IP addresses, and then the egress IP addresses are assigned to those nodes. If the egressIPs parameter is set on a NetNamespace object, but no node hosts that egress IP address, then egress traffic from the namespace will be dropped. High availability of nodes is automatic. If a node that hosts an egress IP address is unreachable and there are nodes that are able to host that egress IP address, then the egress IP address will move to a new node. When the unreachable node comes back online, the egress IP address automatically moves to balance egress IP addresses across nodes. Important The following limitations apply when using egress IP addresses with the OpenShift SDN cluster network provider: You cannot use manually assigned and automatically assigned egress IP addresses on the same nodes. If you manually assign egress IP addresses from an IP address range, you must not make that range available for automatic IP assignment. You cannot share egress IP addresses across multiple namespaces using the OpenShift SDN egress IP address implementation. If you need to share IP addresses across namespaces, the OVN-Kubernetes cluster network provider egress IP address implementation allows you to span IP addresses across multiple namespaces. Note If you use OpenShift SDN in multitenant mode, you cannot use egress IP addresses with any namespace that is joined to another namespace by the projects that are associated with them. For example, if project1 and project2 are joined by running the oc adm pod-network join-projects --to=project1 project2 command, neither project can use an egress IP address. For more information, see BZ#1645577 . 15.2.1.1. Considerations when using automatically assigned egress IP addresses When using the automatic assignment approach for egress IP addresses the following considerations apply: You set the egressCIDRs parameter of each node's HostSubnet resource to indicate the range of egress IP addresses that can be hosted by a node. OpenShift Container Platform sets the egressIPs parameter of the HostSubnet resource based on the IP address range you specify. If the node hosting the namespace's egress IP address is unreachable, OpenShift Container Platform will reassign the egress IP address to another node with a compatible egress IP address range. The automatic assignment approach works best for clusters installed in environments with flexibility in associating additional IP addresses with nodes. 15.2.1.2. Considerations when using manually assigned egress IP addresses This approach is used for clusters where there can be limitations on associating additional IP addresses with nodes such as in public cloud environments. 
When using the manual assignment approach for egress IP addresses the following considerations apply: You set the egressIPs parameter of each node's HostSubnet resource to indicate the IP addresses that can be hosted by a node. Multiple egress IP addresses per namespace are supported. If a namespace has multiple egress IP addresses and those addresses are hosted on multiple nodes, the following additional considerations apply: If a pod is on a node that is hosting an egress IP address, that pod always uses the egress IP address on the node. If a pod is not on a node that is hosting an egress IP address, that pod uses an egress IP address at random. 15.2.2. Configuring automatically assigned egress IP addresses for a namespace In OpenShift Container Platform you can enable automatic assignment of an egress IP address for a specific namespace across one or more nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object with the egress IP address using the following JSON: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. For example, to assign project1 to an IP address of 192.168.1.100 and project2 to an IP address of 192.168.1.101: USD oc patch netnamespace project1 --type=merge -p \ '{"egressIPs": ["192.168.1.100"]}' USD oc patch netnamespace project2 --type=merge -p \ '{"egressIPs": ["192.168.1.101"]}' Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Indicate which nodes can host egress IP addresses by setting the egressCIDRs parameter for each host using the following JSON: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressCIDRs": [ "<ip_address_range>", "<ip_address_range>" ] }' where: <node_name> Specifies a node name. <ip_address_range> Specifies an IP address range in CIDR format. You can specify more than one address range for the egressCIDRs array. For example, to set node1 and node2 to host egress IP addresses in the range 192.168.1.0 to 192.168.1.255: USD oc patch hostsubnet node1 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' USD oc patch hostsubnet node2 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' OpenShift Container Platform automatically assigns specific egress IP addresses to available nodes in a balanced way. In this case, it assigns the egress IP address 192.168.1.100 to node1 and the egress IP address 192.168.1.101 to node2 or vice versa. 15.2.3. Configuring manually assigned egress IP addresses for a namespace In OpenShift Container Platform you can associate one or more egress IP addresses with a namespace. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object by specifying the following JSON object with the desired IP addresses: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. 
For example, to assign the project1 project to the IP addresses 192.168.1.100 and 192.168.1.101 : USD oc patch netnamespace project1 --type=merge \ -p '{"egressIPs": ["192.168.1.100","192.168.1.101"]}' To provide high availability, set the egressIPs value to two or more IP addresses on different nodes. If multiple egress IP addresses are set, then pods use all egress IP addresses roughly equally. Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Manually assign the egress IP to the node hosts. Set the egressIPs parameter on the HostSubnet object on the node host. Using the following JSON, include as many IP addresses as you want to assign to that node host: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>", "<ip_address>" ] }' where: <node_name> Specifies a node name. <ip_address> Specifies an IP address. You can specify more than one IP address for the egressIPs array. For example, to specify that node1 should have the egress IPs 192.168.1.100 , 192.168.1.101 , and 192.168.1.102 : USD oc patch hostsubnet node1 --type=merge -p \ '{"egressIPs": ["192.168.1.100", "192.168.1.101", "192.168.1.102"]}' In the example, all egress traffic for project1 will be routed to the node hosting the specified egress IP, and then connected through Network Address Translation (NAT) to that IP address. 15.3. Configuring an egress firewall for a project As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster. 15.3.1. How an egress firewall works in a project As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: A pod can only connect to internal hosts and cannot initiate connections to the public internet. A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster. A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster. A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. Note Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. You configure an egress firewall policy by creating an EgressNetworkPolicy custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria: An IP address range in CIDR format A DNS name that resolves to an IP address Important If your egress firewall includes a deny rule for 0.0.0.0/0 , access to your OpenShift Container Platform API servers is blocked. To ensure that pods can continue to access the OpenShift Container Platform API servers, you must include the IP address range that the API servers listen on in your egress firewall rules, as in the following example: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow # ... 
- to: cidrSelector: 0.0.0.0/0 3 type: Deny 1 The namespace for the egress firewall. 2 The IP address range that includes your OpenShift Container Platform API servers. 3 A global deny rule prevents access to the OpenShift Container Platform API servers. To find the IP address for your API servers, run oc get ep kubernetes -n default . For more information, see BZ#1988324 . Important You must have OpenShift SDN configured to use either the network policy or multitenant mode to configure an egress firewall. If you use network policy mode, an egress firewall is compatible with only one policy per namespace and will not work with projects that share a network, such as global projects. Warning Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination. 15.3.1.1. Limitations of an egress firewall An egress firewall has the following limitations: No project can have more than one EgressNetworkPolicy object. A maximum of one EgressNetworkPolicy object with a maximum of 1,000 rules can be defined per project. The default project cannot use an egress firewall. When using the OpenShift SDN default Container Network Interface (CNI) network provider in multitenant mode, the following limitations apply: Global projects cannot use an egress firewall. You can make a project global by using the oc adm pod-network make-projects-global command. Projects merged by using the oc adm pod-network join-projects command cannot use an egress firewall in any of the joined projects. Violating any of these restrictions results in a broken egress firewall for the project, and might cause all external network traffic to be dropped. An Egress Firewall resource can be created in the kube-node-lease , kube-public , kube-system , openshift and openshift- projects. 15.3.1.2. Matching order for egress firewall policy rules The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. 15.3.1.3. How Domain Name Server (DNS) resolution works If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 seconds. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL that is less than 30 seconds, the controller sets the duration to the returned value. If the TTL in the response is greater than 30 minutes, the controller sets the duration to 30 minutes. If the TTL is between 30 seconds and 30 minutes, the controller ignores the value and sets the duration to 30 seconds. The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently. Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. 
Due to this current limitation, domain name usage in EgressNetworkPolicy objects is only recommended for domains with infrequent IP address changes. Note The egress firewall always allows pods access to the external interface of the node that the pod is on for DNS resolution. If you use domain names in your egress firewall policy and your DNS resolution is not handled by a DNS server on the local node, then you must add egress firewall rules that allow access to your DNS server's IP addresses if you are using domain names in your pods. 15.3.2. EgressNetworkPolicy custom resource (CR) object You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to. The following YAML describes an EgressNetworkPolicy CR object: EgressNetworkPolicy object apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2 ... 1 A name for your egress firewall policy. 2 A collection of one or more egress network policy rules as described in the following section. 15.3.2.1. EgressNetworkPolicy rules The following YAML describes an egress firewall rule object. The egress stanza expects an array of one or more objects. Egress policy rule stanza egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 1 The type of rule. The value must be either Allow or Deny . 2 A stanza describing an egress traffic match rule. A value for either the cidrSelector field or the dnsName field for the rule. You cannot use both fields in the same rule. 3 An IP address range in CIDR format. 4 A domain name. 15.3.2.2. Example EgressNetworkPolicy CR objects The following example defines several egress firewall policy rules: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0 1 A collection of egress firewall policy rule objects. 15.3.3. Creating an egress firewall policy object As a cluster administrator, you can create an egress firewall policy object for a project. Important If the project already has an EgressNetworkPolicy object defined, you must edit the existing policy to make changes to the egress firewall rules. Prerequisites A cluster that uses the OpenShift SDN default Container Network Interface (CNI) network provider plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Create a policy rule: Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules. In the file you created, define an egress policy object. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to. USD oc create -f <policy_name>.yaml -n <project> In the following example, a new EgressNetworkPolicy object is created in a project named project1 : USD oc create -f default.yaml -n project1 Example output egressnetworkpolicy.network.openshift.io/default created Optional: Save the <policy_name>.yaml file so that you can make changes later. 15.4. Viewing an egress firewall for a project As a cluster administrator, you can view the network traffic rules of any existing egress firewall. 15.4.1. Viewing an EgressNetworkPolicy object You can view an EgressNetworkPolicy object in your cluster.
Prerequisites A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plugin. Install the OpenShift Command-line Interface (CLI), commonly known as oc . You must log in to the cluster. Procedure Optional: To view the names of the EgressNetworkPolicy objects defined in your cluster, enter the following command: USD oc get egressnetworkpolicy --all-namespaces To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect. USD oc describe egressnetworkpolicy <policy_name> Example output Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0 15.5. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 15.5.1. Editing an EgressNetworkPolicy object As a cluster administrator, you can update the egress firewall for a project. Prerequisites A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Optional: If you did not save a copy of the EgressNetworkPolicy object when you created the egress network firewall, enter the following command to create a copy. USD oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to. After making changes to the policy rules, enter the following command to replace the EgressNetworkPolicy object. Replace <filename> with the name of the file containing the updated EgressNetworkPolicy object. USD oc replace -f <filename>.yaml 15.6. Removing an egress firewall from a project As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster. 15.6.1. Removing an EgressNetworkPolicy object As a cluster administrator, you can remove an egress firewall from a project. Prerequisites A cluster using the OpenShift SDN default Container Network Interface (CNI) network provider plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Enter the following command to delete the EgressNetworkPolicy object. Replace <project> with the name of the project and <name> with the name of the object. USD oc delete -n <project> egressnetworkpolicy <name> 15.7. Considerations for the use of an egress router pod 15.7.1. About an egress router pod The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses. Note The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. 
For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software. Important The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic. 15.7.1.1. Egress router modes In redirect mode , an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP. In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. In DNS proxy mode , an egress router pod runs as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. To make use of the reserved source IP address, client pods must be modified to connect to the egress router pod rather than connecting directly to the destination IP address. This modification ensures that external destinations treat traffic as though it were coming from a known source. Redirect mode works for all services except for HTTP and HTTPS. For HTTP and HTTPS services, use HTTP proxy mode. For TCP-based services with IP addresses or domain names, use DNS proxy mode. 15.7.1.2. Egress router pod implementation The egress router pod setup is performed by an initialization container. That container runs in a privileged context so that it can configure the macvlan interface and set up iptables rules. After the initialization container finishes setting up the iptables rules, it exits. The egress router pod then executes the container to handle the egress router traffic. The image used varies depending on the egress router mode. The environment variables determine which addresses the egress-router image uses. The image configures the macvlan interface to use EGRESS_SOURCE as its IP address, with EGRESS_GATEWAY as the IP address for the gateway. Network Address Translation (NAT) rules are set up so that connections to the cluster IP address of the pod on any TCP or UDP port are redirected to the same port on the IP address specified by the EGRESS_DESTINATION variable. If only some of the nodes in your cluster are capable of claiming the specified source IP address and using the specified gateway, you can specify a nodeName or nodeSelector to identify which nodes are acceptable. 15.7.1.3. Deployment considerations An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address. Red Hat OpenStack Platform (RHOSP) If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment.
If you do not allow the traffic, then communication will fail : USD openstack port set --allowed-address \ ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid> Red Hat Virtualization (RHV) If you are using RHV , you must select No Network Filter for the Virtual network interface controller (vNIC). VMware vSphere If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches . View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client. Specifically, ensure that the following are enabled: MAC Address Changes Forged Transits Promiscuous Mode Operation 15.7.1.4. Failover configuration To avoid downtime, you can deploy an egress router pod with a Deployment resource, as in the following example. To create a new Service object for the example deployment, use the oc expose deployment/egress-demo-controller command. apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: "true" spec: 2 initContainers: ... containers: ... 1 Ensure that replicas is set to 1 , because only one pod can use a given egress source IP address at any time. This means that only a single copy of the router runs on a node. 2 Specify the Pod object template for the egress router pod. 15.7.2. Additional resources Deploying an egress router in redirection mode Deploying an egress router in HTTP proxy mode Deploying an egress router in DNS proxy mode 15.8. Deploying an egress router pod in redirect mode As a cluster administrator, you can deploy an egress router pod that is configured to redirect traffic to specified destination IP addresses. 15.8.1. Egress router pod specification for redirect mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in redirect mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 External server to direct traffic to. 
Using this example, connections to the pod are redirected to 203.0.113.25 , with a source IP address of 192.168.12.99 . Example egress router pod specification apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: "true" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 15.8.2. Egress destination configuration format When an egress router pod is deployed in redirect mode, you can specify redirection rules by using one or more of the following formats: <port> <protocol> <ip_address> - Incoming connections to the given <port> should be redirected to the same port on the given <ip_address> . <protocol> is either tcp or udp . <port> <protocol> <ip_address> <remote_port> - As above, except that the connection is redirected to a different <remote_port> on <ip_address> . <ip_address> - If the last line is a single IP address, then any connections on any other port will be redirected to the corresponding port on that IP address. If there is no fallback IP address then connections on other ports are rejected. In the example that follows several rules are defined: The first line redirects traffic from local port 80 to port 80 on 203.0.113.25 . The second and third lines redirect local ports 8080 and 8443 to remote ports 80 and 443 on 203.0.113.26 . The last line matches traffic for any ports not specified in the rules. Example configuration 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 15.8.3. Deploying an egress router pod in redirect mode In redirect mode , an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1 Your pods can now connect to this service. Their connections are redirected to the corresponding ports on the external server, using the reserved egress IP address. 15.8.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 15.9. Deploying an egress router pod in HTTP proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified HTTP and HTTPS-based services. 15.9.1. Egress router pod specification for HTTP mode Define the configuration for an egress router pod in the Pod object. 
The following YAML describes the fields for the configuration of an egress router pod in HTTP mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |- ... ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 A string or YAML multi-line string specifying how to configure the proxy. Note that this is specified as an environment variable in the HTTP proxy container, not with the other environment variables in the init container. 15.9.2. Egress destination configuration format When an egress router pod is deployed in HTTP proxy mode, you can specify redirection rules by using one or more of the following formats. Each line in the configuration specifies one group of connections to allow or deny: An IP address allows connections to that IP address, such as 192.168.1.1 . A CIDR range allows connections to that CIDR range, such as 192.168.1.0/24 . A hostname allows proxying to that host, such as www.example.com . A domain name preceded by *. allows proxying to that domain and all of its subdomains, such as *.example.com . A ! followed by any of the match expressions denies the connection instead. If the last line is * , then anything that is not explicitly denied is allowed. Otherwise, anything that is not allowed is denied. You can also use * to allow connections to all remote destinations. Example configuration !*.example.com !192.168.1.0/24 192.168.2.1 * 15.9.3. Deploying an egress router pod in HTTP proxy mode In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. 
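For example, if you saved the HTTP proxy pod specification shown above to a file, you could create the pod with a command like the following; the file name is an assumption used only for illustration:

oc create -f egress-router-http-proxy.yaml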
To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1 1 Ensure the http port is set to 8080 . To configure the client pod (not the egress proxy pod) to use the HTTP proxy, set the http_proxy or https_proxy variables: apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/ ... 1 The service created in the step. Note Using the http_proxy and https_proxy environment variables is not necessary for all setups. If the above does not create a working setup, then consult the documentation for the tool or software you are running in the pod. 15.9.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 15.10. Deploying an egress router pod in DNS proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified DNS names and IP addresses. 15.10.1. Egress router pod specification for DNS mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in DNS mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- ... - name: EGRESS_DNS_PROXY_DEBUG 5 value: "1" ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 Specify a list of one or more proxy destinations. 5 Optional: Specify to output the DNS proxy log output to stdout . 15.10.2. Egress destination configuration format When the router is deployed in DNS proxy mode, you specify a list of port and destination mappings. A destination may be either an IP address or a DNS name. An egress router pod supports the following formats for specifying port and destination mappings: Port and remote address You can specify a source port and a destination host by using the two field format: <port> <remote_address> . The host can be an IP address or a DNS name. 
If a DNS name is provided, DNS resolution occurs at runtime. For a given host, the proxy connects to the specified source port on the destination host when connecting to the destination host IP address. Port and remote address pair example 80 172.16.12.11 100 example.com Port, remote address, and remote port You can specify a source port, a destination host, and a destination port by using the three field format: <port> <remote_address> <remote_port> . The three field format behaves identically to the two field version, with the exception that the destination port can be different than the source port. Port, remote address, and remote port example 8080 192.168.60.252 80 8443 web.example.com 443 15.10.3. Deploying an egress router pod in DNS proxy mode In DNS proxy mode , an egress router pod acts as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. Create a service for the egress router pod: Create a file named egress-router-service.yaml that contains the following YAML. Set spec.ports to the list of ports that you defined previously for the EGRESS_DNS_PROXY_DESTINATION environment variable. apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: ... type: ClusterIP selector: name: egress-dns-proxy For example: apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy To create the service, enter the following command: USD oc create -f egress-router-service.yaml Pods can now connect to this service. The connections are proxied to the corresponding ports on the external server, using the reserved egress IP address. 15.10.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 15.11. Configuring an egress router pod destination list from a config map As a cluster administrator, you can define a ConfigMap object that specifies destination mappings for an egress router pod. The specific format of the configuration depends on the type of egress router pod. For details on the format, refer to the documentation for the specific egress router pod. 15.11.1. Configuring an egress router destination mappings with a config map For a large or frequently-changing set of destination mappings, you can use a config map to externally maintain the list. An advantage of this approach is that permission to edit the config map can be delegated to users without cluster-admin privileges. Because the egress router pod requires a privileged container, it is not possible for users without cluster-admin privileges to edit the pod definition directly. Note The egress router pod does not automatically update when the config map changes. You must restart the egress router pod to get updates. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file containing the mapping data for the egress router pod, as in the following example: You can put blank lines and comments into this file. 
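For illustration, a redirect-mode destination file (saved as my-egress-destination.txt , the file name used in the commands that follow) could contain the same mappings as the config map data shown below:

# Egress routes for Project "Test", version 3

80   tcp 203.0.113.25

8080 tcp 203.0.113.26 80
8443 tcp 203.0.113.26 443

# Fallback
203.0.113.27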
Create a ConfigMap object from the file: USD oc delete configmap egress-routes --ignore-not-found USD oc create configmap egress-routes \ --from-file=destination=my-egress-destination.txt In the command, the egress-routes value is the name of the ConfigMap object to create and my-egress-destination.txt is the name of the file that the data is read from. Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project "Test", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27 Create an egress router pod definition and specify the configMapKeyRef stanza for the EGRESS_DESTINATION field in the environment stanza: ... env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination ... 15.11.2. Additional resources Redirect mode HTTP proxy mode DNS proxy mode 15.12. Enabling multicast for a project 15.12.1. About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OpenShift SDN default Container Network Interface (CNI) network provider, you can enable multicast on a per-project basis. When using the OpenShift SDN network plugin in networkpolicy isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. Pods might be able to communicate over multicast even when they cannot communicate over unicast. Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects that allow communication between the projects. When using the OpenShift SDN network plugin in multitenant isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project. Multicast packets sent by a pod in one project will be delivered to pods in other projects only if each project is joined together and multicast is enabled in each joined project. 15.12.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate netnamespace <namespace> \ netnamespace.network.openshift.io/multicast-enabled=true Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. 
USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener 15.13. Disabling multicast for a project 15.13.1. Disabling multicast between pods You can disable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Disable multicast by running the following command: USD oc annotate netnamespace <namespace> \ 1 netnamespace.network.openshift.io/multicast-enabled- 1 The namespace for the project you want to disable multicast for. 15.14. Configuring network isolation using OpenShift SDN When your cluster is configured to use the multitenant isolation mode for the OpenShift SDN CNI plugin, each project is isolated by default. Network traffic is not allowed between pods or services in different projects in multitenant isolation mode. You can change the behavior of multitenant isolation for a project in two ways: You can join one or more projects, allowing network traffic between pods and services in different projects. You can disable network isolation for a project. It will be globally accessible, accepting network traffic from pods and services in all other projects. A globally accessible project can access pods and services in all other projects. 15.14.1. Prerequisites You must have a cluster configured to use the OpenShift SDN Container Network Interface (CNI) plugin in multitenant isolation mode. 15.14.2. Joining projects You can join two or more projects to allow network traffic between pods and services in different projects. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Use the following command to join projects to an existing project network: USD oc adm pod-network join-projects --to=<project1> <project2> <project3> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 
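For example, to join every project carrying a hypothetical team=dev label to project1 , you could run a command similar to the following; the label is an assumption used only for illustration:

oc adm pod-network join-projects --to=project1 --selector='team=dev'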
Optional: Run the following command to view the pod networks that you have joined together: USD oc get netnamespaces Projects in the same pod-network have the same network ID in the NETID column. 15.14.3. Isolating a project You can isolate a project so that pods and services in other projects cannot access its pods and services. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure To isolate the projects in the cluster, run the following command: USD oc adm pod-network isolate-projects <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 15.14.4. Disabling network isolation for a project You can disable network isolation for a project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command for the project: USD oc adm pod-network make-projects-global <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 15.15. Configuring kube-proxy The Kubernetes network proxy (kube-proxy) runs on each node and is managed by the Cluster Network Operator (CNO). kube-proxy maintains network rules for forwarding connections for endpoints associated with services. 15.15.1. About iptables rules synchronization The synchronization period determines how frequently the Kubernetes network proxy (kube-proxy) syncs the iptables rules on a node. A sync begins when either of the following events occurs: An event occurs, such as service or endpoint is added to or removed from the cluster. The time since the last sync exceeds the sync period defined for kube-proxy. 15.15.2. kube-proxy configuration parameters You can modify the following kubeProxyConfig parameters. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. Table 15.2. Parameters Parameter Description Values Default iptablesSyncPeriod The refresh period for iptables rules. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package documentation. 30s proxyArguments.iptables-min-sync-period The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. By default, a refresh starts as soon as a change that affects iptables rules occurs. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package 0s 15.15.3. Modifying the kube-proxy configuration You can modify the Kubernetes network proxy configuration for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to a running cluster with the cluster-admin role. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Modify the kubeProxyConfig parameter in the CR with your changes to the kube-proxy configuration, such as in the following example CR: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: ["30s"] Save the file and exit the text editor. 
The syntax is validated by the oc command when you save the file and exit the editor. If your modifications contain a syntax error, the editor opens the file and displays an error message. Enter the following command to confirm the configuration update: USD oc get networks.operator.openshift.io -o yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List Optional: Enter the following command to confirm that the Cluster Network Operator accepted the configuration change: USD oc get clusteroperator network Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m The AVAILABLE field is True when the configuration update is applied successfully.
[ "oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}' oc patch netnamespace project2 --type=merge -p '{\"egressIPs\": [\"192.168.1.101\"]}'", "oc patch hostsubnet <node_name> --type=merge -p '{ \"egressCIDRs\": [ \"<ip_address_range>\", \"<ip_address_range>\" ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}' oc patch hostsubnet node2 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}'", "oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\",\"192.168.1.101\"]}'", "oc patch hostsubnet <node_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\", \"<ip_address>\" ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\", \"192.168.1.101\", \"192.168.1.102\"]}'", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2", "egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0", "oc create -f <policy_name>.yaml -n <project>", "oc create -f default.yaml -n project1", "egressnetworkpolicy.network.openshift.io/v1 created", "oc get egressnetworkpolicy --all-namespaces", "oc describe egressnetworkpolicy <policy_name>", "Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0", "oc get -n <project> egressnetworkpolicy", "oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml", "oc replace -f <filename>.yaml", "oc get -n <project> egressnetworkpolicy", "oc delete -n <project> egressnetworkpolicy <name>", "openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>", "apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: 2 initContainers: containers:", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router 
securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |-", "!*.example.com !192.168.1.0/24 192.168.2.1 *", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- - name: EGRESS_DNS_PROXY_DEBUG 5 value: \"1\"", "80 172.16.12.11 100 example.com", "8080 192.168.60.252 80 8443 web.example.com 443", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: type: ClusterIP selector: name: egress-dns-proxy", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy", "oc create -f egress-router-service.yaml", "Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 Fallback 203.0.113.27", "oc delete configmap egress-routes --ignore-not-found", "oc create configmap egress-routes --from-file=destination=my-egress-destination.txt", "apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27", "env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination", "oc annotate netnamespace <namespace> netnamespace.network.openshift.io/multicast-enabled=true", "oc project <project>", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: 
containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF", "POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')", "oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname", "CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')", "oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"", "mlistener", "oc annotate netnamespace <namespace> \\ 1 netnamespace.network.openshift.io/multicast-enabled-", "oc adm pod-network join-projects --to=<project1> <project2> <project3>", "oc get netnamespaces", "oc adm pod-network isolate-projects <project1> <project2>", "oc adm pod-network make-projects-global <project1> <project2>", "oc edit network.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: [\"30s\"]", "oc get networks.operator.openshift.io -o yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List", "oc get clusteroperator network", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/networking/openshift-sdn-default-cni-network-provider
probe::vm.kmem_cache_alloc
probe::vm.kmem_cache_alloc Name probe::vm.kmem_cache_alloc - Fires when kmem_cache_alloc is requested Synopsis vm.kmem_cache_alloc Values bytes_alloc allocated Bytes ptr pointer to the kmemory allocated name name of the probe point bytes_req requested Bytes gfp_flags type of kmemory to allocate caller_function name of the caller function. gfp_flag_name type of kmemory to allocate(in string format) call_site address of the function calling this kmemory function.
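As an illustrative sketch only, and not part of the tapset reference itself, a one-line SystemTap script that prints these values for every allocation could look like the following:

stap -e 'probe vm.kmem_cache_alloc { printf("%s: requested %d bytes, allocated %d bytes by %s\n", name, bytes_req, bytes_alloc, caller_function) }'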
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-vm-kmem-cache-alloc
Chapter 26. AWS IAM Component
Chapter 26. AWS IAM Component Available as of Camel version 2.23 The AWS IAM component supports managing Amazon IAM users and access keys. Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon IAM. More information is available at Amazon IAM . 26.1. URI Format aws-iam://label[?options] You can append query options to the URI in the following format, ?options=value&option2=value&... 26.2. URI Options The AWS IAM component supports 5 options, which are listed below. Name Description Default Type configuration (advanced) The AWS IAM default configuration IAMConfiguration accessKey (producer) Amazon AWS Access Key String secretKey (producer) Amazon AWS Secret Key String region (producer) The region in which IAM client needs to work String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The AWS IAM endpoint is configured using URI syntax: aws-iam:label with the following path and query parameters: 26.2.1. Path Parameters (1 parameter): Name Description Default Type label Required Logical name String 26.2.2. Query Parameters (8 parameters): Name Description Default Type accessKey (producer) Amazon AWS Access Key String iamClient (producer) To use an existing configured AWS IAM as client AmazonIdentity ManagementClient operation (producer) Required The operation to perform IAMOperations proxyHost (producer) To define a proxy host when instantiating the IAM client String proxyPort (producer) To define a proxy port when instantiating the IAM client Integer region (producer) The region in which IAM client needs to work String secretKey (producer) Amazon AWS Secret Key String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 26.3. Spring Boot Auto-Configuration The component supports 12 options, which are listed below. Name Description Default Type camel.component.aws-iam.access-key Amazon AWS Access Key String camel.component.aws-iam.configuration.access-key Amazon AWS Access Key String camel.component.aws-iam.configuration.iam-client To use an existing configured AWS IAM as client AmazonIdentity ManagementClient camel.component.aws-iam.configuration.operation The operation to perform IAMOperations camel.component.aws-iam.configuration.proxy-host To define a proxy host when instantiating the IAM client String camel.component.aws-iam.configuration.proxy-port To define a proxy port when instantiating the IAM client Integer camel.component.aws-iam.configuration.region The region in which IAM client needs to work String camel.component.aws-iam.configuration.secret-key Amazon AWS Secret Key String camel.component.aws-iam.enabled Whether to enable auto configuration of the aws-iam component. This is enabled by default. Boolean camel.component.aws-iam.region The region in which IAM client needs to work String camel.component.aws-iam.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.aws-iam.secret-key Amazon AWS Secret Key String Required IAM component options You have to provide the iamClient in the Registry or your accessKey and secretKey to access the Amazon IAM service. 26.4. Usage 26.4.1.
Message headers evaluated by the IAM producer Header Type Description CamelAwsIAMOperation String The operation we want to perform CamelAwsIAMUsername String The username for the user you want to manage CamelAwsIAMAccessKeyID String The accessKey you want to manage CamelAwsIAMAccessKeyStatus String The Status of the AccessKey you want to set, possible values are active and inactive 26.4.2. IAM Producer operations The Camel-AWS IAM component provides the following operations on the producer side: listAccessKeys createUser deleteUser listUsers getUser createAccessKey deleteAccessKey updateAccessKey Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws</artifactId> <version>USD{camel-version}</version> </dependency> where USD{camel-version} must be replaced by the actual version of Camel (2.16 or higher). 26.5. See Also Configuring Camel Component Endpoint Getting Started AWS Component
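As a quick, illustrative sketch only (not an example taken from this guide), a Java DSL route that combines the headers and producer operations above might look like this; the endpoint label, credentials, and user name are placeholders:

from("direct:createUser")
    // header evaluated by the IAM producer, as listed in the table above
    .setHeader("CamelAwsIAMUsername", constant("camel-test-user"))
    .to("aws-iam://demo?accessKey=xxx&secretKey=yyy&operation=createUser");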
[ "aws-kms://label[?options]", "aws-iam:label", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws</artifactId> <version>USD{camel-version}</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/aws-iam-component
Chapter 8. Configuring multi-supplier replication with certificate-based authentication
Chapter 8. Configuring multi-supplier replication with certificate-based authentication When you set up replication between two Directory Server instances, you can use certificate-based authentication instead of using a bind DN and password to authenticate to a replication partner. You can do so by adding a new server to the replication topology and setting up replication agreements between the new host and the existing server using certificate-based authentication. Important Certificate-based authentication requires TLS-encrypted connections. 8.1. Preparing accounts and a bind group for the use in replication agreements with certificate-based authentication To use certificate-based authentication in replication agreements, first prepare the accounts and store the client certificates in the userCertificate attributes of these accounts. Additionally, this procedure creates a bind group that you later use in the replication agreements. Perform this procedure on the existing host server1.example.com . Prerequisites You enabled TLS encryption in Directory Server. You stored the client certificates in distinguished encoding rules (DER) format in the /root/server1.der and /root/server2.der files. For details about client certificates and how to request them from your certificate authority (CA), see your CA's documentation. Procedure Create the ou=services entry if it does not exist: # ldapadd -D " cn=Directory Manager " -W -H ldaps://server1.example.com -x dn: ou=services,dc=example,dc=com objectClass: organizationalunit objectClass: top ou: services Create accounts for both servers, such as cn=server1,ou=services,dc=example,dc=com and cn=server2,ou=services,dc=example,dc=com : # ldapadd -D " cn=Directory Manager " -W -H ldaps://server1.example.com -x dn: cn=server1,ou=services,dc=example,dc=com objectClass: top objectClass: person objectClass: inetOrgPerson sn: server1 cn: server1 userPassword: password userCertificate:< file:// /root/server1.der adding new entry "cn=server1,ou=services,dc=example,dc=com" dn: cn=server2,ou=services,dc=example,dc=com objectClass: top objectClass: person objectClass: inetOrgPerson sn: server2 cn: server2 userPassword: password userCertificate:< file:// /root/server2.der adding new entry "cn=server2,ou=services,dc=example,dc=com" Create a group, such as cn=repl_servers,dc=groups,dc=example,dc=com : # dsidm -D " cn=Directory Manager " ldaps://server1.example.com -b " dc=example,dc=com " group create --cn " repl_servers " Add the two replication accounts as members to the group: # dsidm -D " cn=Directory Manager " ldaps://server1.example.com -b " dc=example,dc=com " group add_member repl_servers "cn=server1,ou=services,dc=example,dc=com" # dsidm -D " cn=Directory Manager " ldaps://server1.example.com -b " dc=example,dc=com " group add_member repl_servers "cn=server2,ou=services,dc=example,dc=com" Additional resources Enabling TLS-encrypted connections to Directory Server 8.2. Initializing a new server using a temporary replication manager account Certificate-based authentication uses the certificates stored in the directory. However, before you initialize a new server, the database on server2.example.com is empty and the accounts with the associated certificates do not exist. Therefore, replication using certificates is not possible before the database is initialized. You can overcome this problem by initializing server2.example.com with a temporary replication manager account. Prerequisites You installed the Directory Server instance on server2.example.com .
For details, see Setting up a new instance on the command line using a .inf file . The database for the dc=example,dc=com suffix exists. You enabled TLS encryption in Directory Server on both servers, server1.example.com and server2.example.com . Procedure On server2.example.com , enable replication for the dc=example,dc=com suffix: # dsconf -D " cn=Directory Manager " ldaps://server2.example.com replication enable --suffix " dc=example,dc=com " --role " supplier " --replica-id 2 --bind-dn " cn=replication manager,cn=config " --bind-passwd " password " This command configures the server2.example.com host as a supplier for the dc=example,dc=com suffix, and sets the replica ID of this host to 2 . Additionally, the command creates a temporary cn=replication manager,cn=config user with the specified password and allows this account to replicate changes for the suffix to this host. The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology. On server1.example.com : Enable replication: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com replication enable --suffix=" dc=example,dc=com " --role=" supplier " --replica-id=" 1 " Create a temporary replication agreement which uses the temporary account from the step for authentication: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com repl-agmt create --suffix=" dc=example,dc=com " --host=" server1.example.com " --port= 636 --conn-protocol= LDAPS --bind-dn=" cn=Replication Manager,cn=config " --bind-passwd=" password " --bind-method= SIMPLE --init temporary_agreement Verification Verify that the initialization was successful: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com repl-agmt init-status --suffix " dc=example,dc=com " temporary_agreement Agreement successfully initialized. Additional resources Installing Red Hat Directory Server Enabling TLS-encrypted connections to Directory Server 8.3. Configuring multi-supplier replication with certificate-based authentication In a multi-supplier replication environment with certificate-based authentication, the replicas authenticate each others using certificates. Prerequisites You set up certificate-based authentication on both hosts, server1.example.com and server2.example.com . Directory Server trusts the certificate authority (CA) that issues the client certificates. The client certificates meet the requirements set in /etc/dirsrv/slapd-instance_name/certmap.conf on the servers. 
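For orientation only, a minimal certmap.conf default mapping often takes the following shape; the directive values are an assumption to adapt to your certificates rather than a configuration mandated by this chapter:

certmap default default
default:DNComps
default:FilterComps
default:verifycert on

With verifycert on, Directory Server additionally checks that the certificate presented by the replication partner matches the certificate stored in the userCertificate attribute of the mapped entry.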
Procedure On server1.example.com : Remove the temporary replication agreement: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com repl-agmt delete --suffix=" dc=example,dc=com " temporary_agreement Add the cn=repl_servers,dc=groups,dc=example,dc=com bind group to the replication settings: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com replication set --suffix=" dc=example,dc=com " --repl-bind-group "cn=repl_servers,dc=groups,dc=example,dc=com" Configure Directory Server to automatically check for changes in the bind group: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com replication set --suffix=" dc=example,dc=com " --repl-bind-group-interval= 0 On server2.example.com : Remove the temporary replication manager account: # dsconf -D " cn=Directory Manager " ldaps://server2.example.com replication delete-manager --suffix=" dc=example,dc=com " --name=" Replication Manager " Add the cn=repl_servers,dc=groups,dc=example,dc=com bind group to the replication settings: # dsconf -D " cn=Directory Manager " ldaps://server2.example.com replication set --suffix=" dc=example,dc=com " --repl-bind-group "cn=repl_servers,dc=groups,dc=example,dc=com" Configure Directory Server to automatically check for changes in the bind group: # dsconf -D " cn=Directory Manager " ldap://server2.example.com replication set --suffix=" dc=example,dc=com " --repl-bind-group-interval=0 Create the replication agreement with certificate-based authentication: dsconf -D " cn=Directory Manager " ldaps://server2.example.com repl-agmt create --suffix=" dc=example,dc=com " --host=" server1.example.com " --port= 636 --conn-protocol= LDAPS --bind-method=" SSLCLIENTAUTH " --init server2-to-server1 On server1.example.com , create the replication agreement with certificate-based authentication: dsconf -D " cn=Directory Manager " ldaps://server1.example.com repl-agmt create --suffix=" dc=example,dc=com " --host=" server2.example.com " --port= 636 --conn-protocol= LDAPS --bind-method=" SSLCLIENTAUTH " --init server1-to-server2 Verification Verify on each server that the initialization was successful: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com repl-agmt init-status --suffix " dc=example,dc=com " server1-to-server2 Agreement successfully initialized. # dsconf -D " cn=Directory Manager " ldaps://server2.example.com repl-agmt init-status --suffix " dc=example,dc=com " server2-to-server1 Agreement successfully initialized. Additional resources Setting up certificate-based authentication Changing the CA trust flags
[ "ldapadd -D \" cn=Directory Manager \" -W -H ldaps://server1.example.com -x dn: ou=services,dc=example,dc=com objectClass: organizationalunit objectClass: top ou: services", "ldapadd -D \" cn=Directory Manager \" -W -H ldaps://server1.example.com -x dn: cn=server1,ou=services,dc=example,dc=com objectClass: top objectClass: person objectClass: inetOrgPerson sn: server1 cn: server1 userPassword: password userCertificate:< file:// /root/server1.der adding new entry \"cn=server1,ou=services,dc=example,dc=com\" dn: cn=server2,ou=services,dc=example,dc=com objectClass: top objectClass: person objectClass: inetOrgPerson sn: server2 cn: server2 userPassword: password userCertificate:< file:// /root/server2.der adding new entry \"cn=server2,ou=services,dc=example,dc=com\"", "dsidm -D \" cn=Directory Manager \" ldaps://server1.example.com -b \" dc=example,dc=com \" group create --cn \" repl_servers \"", "dsidm -D \" cn=Directory Manager \" ldaps://server1.example.com -b \" dc=example,dc=com \" group add_member repl_servers \"cn=server1,ou=services,dc=example,dc=com\" dsidm -D \" cn=Directory Manager \" ldaps://server1.example.com -b \" dc=example,dc=com \" group add_member repl_servers \"cn=server2,ou=services,dc=example,dc=com\"", "dsconf -D \" cn=Directory Manager \" ldaps://server2.example.com replication enable --suffix \" dc=example,dc=com \" --role \" supplier \" --replica-id 2 --bind-dn \" cn=replication manager,cn=config \" --bind-passwd \" password \"", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com replication enable --suffix=\" dc=example,dc=com \" --role=\" supplier \" --replica-id=\" 1 \"", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com repl-agmt create --suffix=\" dc=example,dc=com \" --host=\" server1.example.com \" --port= 636 --conn-protocol= LDAPS --bind-dn=\" cn=Replication Manager,cn=config \" --bind-passwd=\" password \" --bind-method= SIMPLE --init temporary_agreement", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com repl-agmt init-status --suffix \" dc=example,dc=com \" temporary_agreement Agreement successfully initialized.", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com repl-agmt delete --suffix=\" dc=example,dc=com \" temporary_agreement", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com replication set --suffix=\" dc=example,dc=com \" --repl-bind-group \"cn=repl_servers,dc=groups,dc=example,dc=com\"", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com replication set --suffix=\" dc=example,dc=com \" --repl-bind-group-interval= 0", "dsconf -D \" cn=Directory Manager \" ldaps://server2.example.com replication delete-manager --suffix=\" dc=example,dc=com \" --name=\" Replication Manager \"", "dsconf -D \" cn=Directory Manager \" ldaps://server2.example.com replication set --suffix=\" dc=example,dc=com \" --repl-bind-group \"cn=repl_servers,dc=groups,dc=example,dc=com\"", "dsconf -D \" cn=Directory Manager \" ldap://server2.example.com replication set --suffix=\" dc=example,dc=com \" --repl-bind-group-interval=0", "dsconf -D \" cn=Directory Manager \" ldaps://server2.example.com repl-agmt create --suffix=\" dc=example,dc=com \" --host=\" server1.example.com \" --port= 636 --conn-protocol= LDAPS --bind-method=\" SSLCLIENTAUTH \" --init server2-to-server1", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com repl-agmt create --suffix=\" dc=example,dc=com \" --host=\" server2.example.com \" --port= 636 --conn-protocol= LDAPS --bind-method=\" SSLCLIENTAUTH \" 
--init server1-to-server2", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com repl-agmt init-status --suffix \" dc=example,dc=com \" server1-to-server2 Agreement successfully initialized. dsconf -D \" cn=Directory Manager \" ldaps://server2.example.com repl-agmt init-status --suffix \" dc=example,dc=com \" server2-to-server1 Agreement successfully initialized." ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/securing_red_hat_directory_server/assembly_configuring-multi-supplier-replication-with-certificate-based-authentication_securing-rhds
Chapter 6. Working with nodes
Chapter 6. Working with nodes 6.1. Viewing and listing the nodes in your OpenShift Container Platform cluster You can list all the nodes in your cluster to obtain information such as status, age, memory usage, and details about the nodes. When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks. 6.1.1. About listing all the nodes in a cluster You can get detailed information on the nodes in the cluster. The following command lists all nodes: USD oc get nodes The following example is a cluster with healthy nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.25.0 node1.example.com Ready worker 7h v1.25.0 node2.example.com Ready worker 7h v1.25.0 The following example is a cluster with one unhealthy node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.25.0 node1.example.com NotReady,SchedulingDisabled worker 7h v1.25.0 node2.example.com Ready worker 7h v1.25.0 The conditions that trigger a NotReady status are shown later in this section. The -o wide option provides additional information on nodes. USD oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.25.0 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.25.0-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.25.0 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.25.0-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.25.0 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.25.0-30.rhaos4.10.gitf2f339d.el8-dev The following command lists information about a single node: USD oc get node <node> For example: USD oc get node node1.example.com Example output NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.25.0 The following command provides more detailed information about a specific node, including the reason for the current condition: USD oc describe node <node> For example: USD oc describe node node1.example.com Example output Name: node1.example.com 1 Roles: worker 2 Labels: beta.kubernetes.io/arch=amd64 3 beta.kubernetes.io/instance-type=m4.large beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=us-east-2 failure-domain.beta.kubernetes.io/zone=us-east-2a kubernetes.io/hostname=ip-10-0-140-16 node-role.kubernetes.io/worker= Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure 
False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.25.0-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.25.0 Kube-Proxy Version: v1.25.0 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. 
#... 1 The name of the node. 2 The role of the node, either master or worker . 3 The labels applied to the node. 4 The annotations applied to the node. 5 The taints applied to the node. 6 The node conditions and status. The conditions stanza lists the Ready , PIDPressure , MemoryPressure , DiskPressure and OutOfDisk status. These condition are described later in this section. 7 The IP address and hostname of the node. 8 The pod resources and allocatable resources. 9 Information about the node host. 10 The pods on the node. 11 The events reported by the node. Note The control plane label is not automatically added to newly created or updated master nodes. If you want to use the control plane label for your nodes, you can manually configure the label. For more information, see Understanding how to update labels on nodes in the Additional resources section. Among the information shown for nodes, the following node conditions appear in the output of the commands shown in this section: Table 6.1. Node Conditions Condition Description Ready If true , the node is healthy and ready to accept pods. If false , the node is not healthy and is not accepting pods. If unknown , the node controller has not received a heartbeat from the node for the node-monitor-grace-period (the default is 40 seconds). DiskPressure If true , the disk capacity is low. MemoryPressure If true , the node memory is low. PIDPressure If true , there are too many processes on the node. OutOfDisk If true , the node has insufficient free space on the node for adding new pods. NetworkUnavailable If true , the network for the node is not correctly configured. NotReady If true , one of the underlying components, such as the container runtime or network, is experiencing issues or is not yet configured. SchedulingDisabled Pods cannot be scheduled for placement on the node. Additional resources Understanding how to update labels on nodes 6.1.2. Listing pods on a node in your cluster You can list all the pods on a specific node. Procedure To list all or selected pods on selected nodes: USD oc get pod --selector=<nodeSelector> USD oc get pod --selector=kubernetes.io/os Or: USD oc get pod -l=<nodeSelector> USD oc get pod -l kubernetes.io/os=linux To list all pods on a specific node, including terminated pods: USD oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename> 6.1.3. Viewing memory and CPU usage statistics on your nodes You can display usage statistics about nodes, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72% To view the usage statistics for nodes with labels: USD oc adm top node --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . 6.2. Working with nodes As an administrator, you can perform several tasks to make your clusters more efficient. 6.2.1. 
Understanding how to evacuate pods on nodes Evacuating pods allows you to migrate all or selected pods from a given node or nodes. You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selectors are based on labels, so all the pods with the specified label will be evacuated. Procedure Mark the nodes unschedulable before performing the pod evacuation. Mark the node as unschedulable: USD oc adm cordon <node1> Example output node/<node1> cordoned Check that the node status is Ready,SchedulingDisabled : USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.25.0 Evacuate the pods using one of the following methods: Evacuate all or selected pods on one or more nodes: USD oc adm drain <node1> <node2> [--pod-selector=<pod_selector>] Force the deletion of bare pods using the --force option. When set to true , deletion continues even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set: USD oc adm drain <node1> <node2> --force=true Set a period of time in seconds for each pod to terminate gracefully, use --grace-period . If negative, the default value specified in the pod will be used: USD oc adm drain <node1> <node2> --grace-period=-1 Ignore pods managed by daemon sets using the --ignore-daemonsets flag set to true : USD oc adm drain <node1> <node2> --ignore-daemonsets=true Set the length of time to wait before giving up using the --timeout flag. A value of 0 sets an infinite length of time: USD oc adm drain <node1> <node2> --timeout=5s Delete pods even if there are pods using emptyDir volumes by setting the --delete-emptydir-data flag to true . Local data is deleted when the node is drained: USD oc adm drain <node1> <node2> --delete-emptydir-data=true List objects that will be migrated without actually performing the evacuation, using the --dry-run option set to true : USD oc adm drain <node1> <node2> --dry-run=true Instead of specifying specific node names (for example, <node1> <node2> ), you can use the --selector=<node_selector> option to evacuate pods on selected nodes. Mark the node as schedulable when done. USD oc adm uncordon <node1> 6.2.2. Understanding how to update labels on nodes You can update any label on a node. Node labels are not persisted after a node is deleted even if the node is backed up by a Machine. Note Any change to a MachineSet object is not applied to existing machines owned by the compute machine set. For example, labels edited or added to an existing MachineSet object are not propagated to existing machines and nodes associated with the compute machine set. The following command adds or updates labels on a node: USD oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n> For example: USD oc label nodes webconsole-7f7f6 unhealthy=true Tip You can alternatively apply the following YAML to apply the label: kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #... The following command updates all pods in the namespace: USD oc label pods --all <key_1>=<value_1> For example: USD oc label pods --all status=unhealthy 6.2.3. 
Understanding how to mark nodes as unschedulable or schedulable By default, healthy nodes with a Ready status are marked as schedulable, which means that you can place new pods on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected. The following command marks a node or nodes as unschedulable: USD oc adm cordon <node> For example: USD oc adm cordon node1.example.com Example output node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled The following command marks a currently unschedulable node or nodes as schedulable: USD oc adm uncordon <node1> Alternatively, instead of specifying specific node names (for example, <node> ), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable. 6.2.4. Handling errors in single-node OpenShift clusters when the node reboots without draining application pods In single-node OpenShift clusters and in OpenShift Container Platform clusters in general, a situation can arise where a node reboot occurs without first draining the node. This can occur when an application pod requesting devices fails with the UnexpectedAdmissionError error. Deployment , ReplicaSet , or DaemonSet errors are reported because the application pods that require those devices start before the pod serving those devices. You cannot control the order of pod restarts. While this behavior is to be expected, it can cause a pod to remain on the cluster even though it has failed to deploy successfully. The pod continues to report UnexpectedAdmissionError . This issue is mitigated by the fact that application pods are typically included in a Deployment , ReplicaSet , or DaemonSet . If a pod is in this error state, it is of little concern because another instance should be running. Belonging to a Deployment , ReplicaSet , or DaemonSet guarantees the successful creation and execution of subsequent pods and ensures the successful deployment of the application. There is ongoing work upstream to ensure that such pods are gracefully terminated. Until that work is resolved, run the following command for a single-node OpenShift cluster to remove the failed pods: USD oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE> Note The option to drain the node is unavailable for single-node OpenShift clusters. Additional resources Understanding how to evacuate pods on nodes 6.2.5. Deleting nodes 6.2.5.1. Deleting nodes from a cluster To delete a node from the OpenShift Container Platform cluster, scale down the appropriate MachineSet object. Important When a cluster is integrated with a cloud provider, you must delete the corresponding machine to delete a node. Do not try to use the oc delete node command for this task. When you delete a node by using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods that are not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Note If you are running a cluster on bare metal, you cannot delete a node by editing MachineSet objects. Compute machine sets are only available when a cluster is integrated with a cloud provider. Instead you must unschedule and drain the node before manually deleting it.
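When the cluster is integrated with a cloud provider, it can help to first confirm which machine backs the node that you want to remove before scaling down its compute machine set. The following lookup is a minimal sketch, assuming the Machine API resources are in the usual openshift-machine-api namespace; the NODE column of the wide output maps each machine to its node: USD oc get machines -n openshift-machine-api -o wide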
Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets -n openshift-machine-api The compute machine sets are listed in the form of <cluster-id>-worker-<aws-region-az> . Scale down the compute machine set by using one of the following methods: Specify the number of replicas to scale down to by running the following command: USD oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api Edit the compute machine set custom resource by running the following command: USD oc edit machineset <machine-set-name> -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # ... name: <machine-set-name> namespace: openshift-machine-api # ... spec: replicas: 2 1 # ... 1 Specify the number of replicas to scale down to. Additional resources Manually scaling a compute machine set 6.2.5.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 6.3. Managing nodes OpenShift Container Platform uses a KubeletConfig custom resource (CR) to manage the configuration of nodes. By creating an instance of a KubeletConfig object, a managed machine config is created to override setting on the node. Note Logging in to remote machines for the purpose of changing their configuration is not supported. 6.3.1. Modifying nodes To make configuration changes to a cluster, or machine pool, you must create a custom resource definition (CRD), or kubeletConfig object. OpenShift Container Platform uses the Machine Config Controller to watch for changes introduced through the CRD to apply the changes to the cluster. Note Because the fields in a kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the validation of those fields is handled directly by the kubelet itself. Please refer to the relevant Kubernetes documentation for the valid values for these fields. Invalid values in the kubeletConfig object can render cluster nodes unusable. Procedure Obtain the label associated with the static CRD, Machine Config Pool, for the type of node you want to configure. Perform one of the following steps: Check current labels of the desired machine config pool. 
For example: USD oc get machineconfigpool --show-labels Example output NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False Add a custom label to the desired machine config pool. For example: USD oc label machineconfigpool worker custom-kubelet=enabled Create a kubeletconfig custom resource (CR) for your configuration change. For example: Sample configuration for a custom-config CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #... 1 Assign a name to CR. 2 Specify the label to apply the configuration change, this is the label you added to the machine config pool. 3 Specify the new value(s) you want to change. Create the CR object. USD oc create -f <file-name> For example: USD oc create -f master-kube-config.yaml Most Kubelet Configuration options can be set by the user. The following options are not allowed to be overwritten: CgroupDriver ClusterDNS ClusterDomain StaticPodPath Note If a single node contains more than 50 images, pod scheduling might be imbalanced across nodes. This is because the list of images on a node is shortened to 50 by default. You can disable the image limit by editing the KubeletConfig object and setting the value of nodeStatusMaxImages to -1 . 6.3.2. Configuring control plane nodes as schedulable You can configure control plane nodes to be schedulable, meaning that new pods are allowed for placement on the master nodes. By default, control plane nodes are not schedulable. You can set the masters to be schedulable, but must retain the worker nodes. Note You can deploy OpenShift Container Platform with no worker nodes on a bare metal cluster. In this case, the control plane nodes are marked schedulable by default. You can allow or disallow control plane nodes to be schedulable by configuring the mastersSchedulable field. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Procedure Edit the schedulers.config.openshift.io resource. USD oc edit schedulers.config.openshift.io cluster Configure the mastersSchedulable field. apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: "2019-09-10T03:04:05Z" generation: 1 name: cluster resourceVersion: "433" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #... 1 Set to true to allow control plane nodes to be schedulable, or false to disallow control plane nodes to be schedulable. Save the file to apply the changes. 6.3.3. Setting SELinux booleans OpenShift Container Platform allows you to enable and disable an SELinux boolean on a Red Hat Enterprise Linux CoreOS (RHCOS) node. The following procedure explains how to modify SELinux booleans on nodes using the Machine Config Operator (MCO). This procedure uses container_manage_cgroup as the example boolean. You can modify this value to whichever boolean you need. Prerequisites You have installed the OpenShift CLI (oc). 
Procedure Create a new YAML file with a MachineConfig object, displayed in the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #... Create the new MachineConfig object by running the following command: USD oc create -f 99-worker-setsebool.yaml Note Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. 6.3.4. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. Examples of kernel arguments you could set include: nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. systemd.unified_cgroup_hierarchy : Enables Linux control group version 2 (cgroup v2). cgroup v2 is the version of the kernel control group and offers multiple improvements. Important OpenShift Container Platform cgroups version 2 support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. 
Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0 . Create the new machine config: USD oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.25.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.25.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.25.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.25.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.25.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.25.0 You can see that scheduling on each worker node is disabled as the change is being applied. 
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 6.3.5. Enabling swap memory use on nodes Important Enabling swap memory use on nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can enable swap memory use for OpenShift Container Platform workloads on a per-node basis. Warning Enabling swap memory can negatively impact workload performance and out-of-resource handling. Do not enable swap memory on control plane nodes. To enable swap memory, create a kubeletconfig custom resource (CR) to set the swapbehavior parameter. You can set limited or unlimited swap memory: Limited: Use the LimitedSwap value to limit how much swap memory workloads can use. Any workloads on the node that are not managed by OpenShift Container Platform can still use swap memory. The LimitedSwap behavior depends on whether the node is running with Linux control groups version 1 (cgroups v1) or version 2 (cgroup v2) : cgroup v1: OpenShift Container Platform workloads can use any combination of memory and swap, up to the pod's memory limit, if set. cgroup v2: OpenShift Container Platform workloads cannot use swap memory. Unlimited: Use the UnlimitedSwap value to allow workloads to use as much swap memory as they request, up to the system limit. Because the kubelet will not start in the presence of swap memory without this configuration, you must enable swap memory in OpenShift Container Platform before enabling swap memory on the nodes. If there is no swap memory present on a node, enabling swap memory in OpenShift Container Platform has no effect. Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.10 or later. You are logged in to the cluster as a user with administrative privileges. You have enabled the TechPreviewNoUpgrade feature set on the cluster (see Nodes Working with clusters Enabling features using feature gates ). Note Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters. If cgroup v2 is enabled on a node, you must enable swap accounting on the node, by setting the swapaccount=1 kernel argument. Procedure Apply a custom label to the machine config pool where you want to allow swap memory. USD oc label machineconfigpool worker kubelet-swap=enabled Create a custom resource (CR) to enable and configure swap settings. 
apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #... 1 Set to false to enable swap memory use on the associated nodes. Set to true to disable swap memory use. 2 Specify the swap memory behavior. If unspecified, the default is LimitedSwap . Enable swap memory on the machines. 6.3.6. Migrating control plane nodes from one RHOSP host to another You can run a script that moves a control plane node from one Red Hat OpenStack Platform (RHOSP) node to another. Prerequisites The environment variable OS_CLOUD refers to a clouds entry that has administrative credentials in a clouds.yaml file. The environment variable KUBECONFIG refers to a configuration that contains administrative OpenShift Container Platform credentials. Procedure From a command line, run the following script: #!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo "Usage: 'USD0 node_name'" exit 64 fi # Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo "The script needs OpenStack admin credentials. Exiting"; exit 77; } # Check for admin OpenShift credentials oc adm top node >/dev/null || { >&2 echo "The script needs OpenShift admin credentials. Exiting"; exit 77; } set -x declare -r node_name="USD1" declare server_id server_id="USD(openstack server list --all-projects -f value -c ID -c Name | grep "USDnode_name" | cut -d' ' -f1)" readonly server_id # Drain the node oc adm cordon "USDnode_name" oc adm drain "USDnode_name" --delete-emptydir-data --ignore-daemonsets --force # Power off the server oc debug "node/USD{node_name}" -- chroot /host shutdown -h 1 # Verify the server is shut off until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Migrate the node openstack server migrate --wait "USDserver_id" # Resize the VM openstack server resize confirm "USDserver_id" # Wait for the resize confirm to finish until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Restart the VM openstack server start "USDserver_id" # Wait for the node to show up as Ready: until oc get node "USDnode_name" | grep -q "^USD{node_name}[[:space:]]\+Ready"; do sleep 5; done # Uncordon the node oc adm uncordon "USDnode_name" # Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type "Degraded" }}{{ if ne .status "False" }}DEGRADED{{ end }}{{ else if eq .type "Progressing"}}{{ if ne .status "False" }}PROGRESSING{{ end }}{{ else if eq .type "Available"}}{{ if ne .status "True" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\(DEGRADED\|PROGRESSING\|NOTAVAILABLE\)'; do sleep 5; done If the script completes, the control plane machine is migrated to a new RHOSP node. 6.4. Managing the maximum number of pods per node In OpenShift Container Platform, you can configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit or both. If you use both options, the lower of the two limits the number of pods on a node. When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. 
Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are large number of I/O intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. The podsPerCore parameter sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . The value of the podsPerCore parameter cannot exceed the value of the maxPods parameter. The maxPods parameter sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 6.4.1. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #... 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, the default value for podsPerCore is 10 and the default value for maxPods is 250 . This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor. Run the following command to create the CR: USD oc create -f <file_name>.yaml Verification List the MachineConfigPool CRDs to see if the change is applied. 
The UPDATING column reports True if the change is picked up by the Machine Config Controller: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False 6.5. Using the Node Tuning Operator Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the tuned daemon. The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. 6.5.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run the following command to access an example Node Tuning Operator specification: oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. 
Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 6.5.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. 
If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. 
Example: machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on a OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load provider-<cloud-provider> profile if such profile exists. The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently ship any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 6.5.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 6.5.4. 
Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD 6.6. Remediating, fencing, and maintaining nodes When node-level failures occur, such as the kernel hangs or network interface controllers (NICs) fail, the work required from the cluster does not decrease, and workloads from affected nodes need to be restarted somewhere. Failures affecting these workloads risk data loss, corruption, or both. It is important to isolate the node, known as fencing , before initiating recovery of the workload, known as remediation , and recovery of the node. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. 6.7. Understanding node rebooting To reboot a node without causing an outage for applications running on the platform, it is important to first evacuate the pods. For pods that are made highly available by the routing tier, nothing else needs to be done. For other pods needing storage, typically databases, it is critical to ensure that they can remain in operation with one pod temporarily going offline. While implementing resiliency for stateful pods is different for each application, in all cases it is important to configure the scheduler to use node anti-affinity to ensure that the pods are properly spread across available nodes. Another challenge is how to handle nodes that are running critical infrastructure such as the router or the registry. The same node evacuation process applies, though it is important to understand certain edge cases. 6.7.1. About rebooting nodes running critical infrastructure When rebooting nodes that host critical OpenShift Container Platform infrastructure components, such as router pods, registry pods, and monitoring pods, ensure that there are at least three nodes available to run these components. The following scenario demonstrates how service interruptions can occur with applications running on OpenShift Container Platform when only two nodes are available: Node A is marked unschedulable and all pods are evacuated. The registry pod running on that node is now redeployed on node B. Node B is now running both registry pods. Node B is now marked unschedulable and is evacuated. The service exposing the two pod endpoints on node B loses all endpoints, for a brief period of time, until they are redeployed to node A. When using three nodes for infrastructure components, this process does not result in a service disruption. However, due to pod scheduling, the last node that is evacuated and brought back into rotation does not have a registry pod. One of the other nodes has two registry pods. To schedule the third registry pod on the last node, use pod anti-affinity to prevent the scheduler from locating two registry pods on the same node. Additional information For more information on pod anti-affinity, see Placing pods relative to other pods using affinity and anti-affinity rules . 6.7.2. 
Rebooting a node using pod anti-affinity Pod anti-affinity is slightly different than node anti-affinity. Node anti-affinity can be violated if there are no other suitable locations to deploy a pod. Pod anti-affinity can be set to either required or preferred. With this in place, if only two infrastructure nodes are available and one is rebooted, the container image registry pod is prevented from running on the other node. oc get pods reports the pod as unready until a suitable node is available. Once a node is available and all pods are back in ready state, the node can be restarted. Procedure To reboot a node using pod anti-affinity: Edit the node specification to configure pod anti-affinity: apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #... 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . This example assumes the container image registry pod has a label of registry=default . Pod anti-affinity can use any Kubernetes match expression. Enable the MatchInterPodAffinity scheduler predicate in the scheduling policy file. Perform a graceful restart of the node. 6.7.3. Understanding how to reboot nodes running routers In most cases, a pod running an OpenShift Container Platform router exposes a host port. The PodFitsPorts scheduler predicate ensures that no router pods using the same port can run on the same node, and pod anti-affinity is achieved. If the routers are relying on IP failover for high availability, there is nothing else that is needed. For router pods relying on an external service such as AWS Elastic Load Balancing for high availability, it is that service's responsibility to react to router pod restarts. In rare cases, a router pod may not have a host port configured. In those cases, it is important to follow the recommended restart process for infrastructure nodes. 6.7.4. Rebooting a node gracefully Before rebooting a node, it is recommended to backup etcd data to avoid any data loss on the node. Note For single-node OpenShift clusters that require users to perform the oc login command rather than having the certificates in kubeconfig file to manage the cluster, the oc adm commands might not be available after cordoning and draining the node. This is because the openshift-oauth-apiserver pod is not running due to the cordon. You can use SSH to access the nodes as indicated in the following procedure. In a single-node OpenShift cluster, pods cannot be rescheduled when cordoning and draining. However, doing so gives the pods, especially your workload pods, time to properly stop and release associated resources. 
Procedure To perform a graceful restart of a node: Mark the node as unschedulable: USD oc adm cordon <node1> Drain the node to remove all the running pods: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force You might receive errors that pods associated with custom pod disruption budgets (PDB) cannot be evicted. Example error error when evicting pods/"rails-postgresql-example-1-72v2w" -n "rails" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. In this case, run the drain command again, adding the disable-eviction flag, which bypasses the PDB checks: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction Access the node in debug mode: USD oc debug node/<node1> Change your root directory to /host : USD chroot /host Restart the node: USD systemctl reboot In a moment, the node enters the NotReady state. Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and perform the reboot. USD ssh core@<master-node>.<cluster_name>.<base_domain> USD sudo systemctl reboot After the reboot is complete, mark the node as schedulable by running the following command: USD oc adm uncordon <node1> Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and uncordon it. USD ssh core@<target_node> USD sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig Verify that the node is ready: USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8 Additional information For information on etcd data backup, see Backing up etcd data . 6.8. Freeing node resources using garbage collection As an administrator, you can use OpenShift Container Platform to ensure that your nodes are running efficiently by freeing up resources through garbage collection. The OpenShift Container Platform node performs two types of garbage collection: Container garbage collection: Removes terminated containers. Image garbage collection: Removes images not referenced by any running pods. 6.8.1. Understanding how terminated containers are removed through garbage collection Container garbage collection removes terminated containers by using eviction thresholds. When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 6.2. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. 
DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the evictionpressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. Note Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. You cannot set an eviction pressure transition period to zero seconds. 6.8.2. Understanding how images are removed through garbage collection Image garbage collection removes images that are not referenced by any running pods. OpenShift Container Platform determines which images to remove from a node based on the disk usage that is reported by cAdvisor . The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. Default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 6.3. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . This value must be greater than the imageGCLowThresholdPercent value. imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . This value must be less than the imageGCHighThresholdPercent value. Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the spins. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 6.8.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. 
You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Container garbage collection removes terminated containers. Image garbage collection removes images that are not referenced by any running pods. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. Sample configuration for a container garbage collection CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #... 1 Name for the object. 2 Specify the label from the machine config pool. 3 For container garbage collection: Type of eviction: evictionSoft or evictionHard . 4 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. 5 For container garbage collection: Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 For container garbage collection: The duration to wait before transitioning out of an eviction pressure condition. Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. 8 For image garbage collection: The minimum age for an unused image before the image is removed by garbage collection. 9 For image garbage collection: Image garbage collection is triggered at the specified percent of disk usage (expressed as an integer). This value must be greater than the imageGCLowThresholdPercent value. 10 For image garbage collection: Image garbage collection attempts to free resources to the specified percent of disk usage (expressed as an integer). This value must be less than the imageGCHighThresholdPercent value. 
Run the following command to create the CR: USD oc create -f <file_name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verification Verify that garbage collection is active by entering the following command. The Machine Config Pool you specified in the custom resource appears with UPDATING as true until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 6.9. Allocating resources for nodes in an OpenShift Container Platform cluster To provide more reliable scheduling and minimize node resource overcommitment, reserve a portion of the CPU and memory resources for use by the underlying node components, such as kubelet and kube-proxy , and the remaining system components, such as sshd and NetworkManager . By specifying the resources to reserve, you provide the scheduler with more information about the remaining CPU and memory resources that a node has available for use by pods. You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes or you can manually determine and set the best resources for your nodes. Important To manually set resource values, you must use a kubelet config CR. You cannot use a machine config CR. 6.9.1. Understanding how to allocate resources for nodes CPU and memory resources reserved for node components in OpenShift Container Platform are based on two node settings: Setting Description kube-reserved This setting is not used with OpenShift Container Platform. Add the CPU and memory resources that you planned to reserve to the system-reserved setting. system-reserved This setting identifies the resources to reserve for the node components and system components, such as CRI-O and Kubelet. The default settings depend on the OpenShift Container Platform and Machine Config Operator versions. Confirm the default systemReserved parameter on the machine-config-operator repository. If a flag is not set, the defaults are used. If none of the flags are set, the allocated resource is set to the node's capacity as it was before the introduction of allocatable resources. Note Any CPUs specifically reserved using the reservedSystemCPUs parameter are not available for allocation using kube-reserved or system-reserved . 6.9.1.1. How OpenShift Container Platform computes allocated resources An allocated amount of a resource is computed based on the following formula: [Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds] Note The withholding of Hard-Eviction-Thresholds from Allocatable improves system reliability because the value for Allocatable is enforced for pods at the node level. If Allocatable is negative, it is set to 0 . Each node reports the system resources that are used by the container runtime and kubelet. To simplify configuring the system-reserved parameter, view the resource use for the node by using the node summary API. The node summary is available at /api/v1/nodes/<node>/proxy/stats/summary . 6.9.1.2. How nodes enforce resource constraints The node is able to limit the total amount of resources that pods can consume based on the configured allocatable value. This feature significantly improves the reliability of the node by preventing pods from using CPU and memory resources that are needed by system services such as the container runtime and node agent.
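To see the values described in the previous two subsections on a live node, a minimal sketch that compares the capacity and allocatable amounts reported in the node status and then queries the node summary API (the node name is a placeholder; the use of jq and the .node.cpu and .node.memory field paths are assumptions based on the kubelet summary schema rather than values taken from this document):

$ oc get node <node_name> -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'
$ oc get --raw "/api/v1/nodes/<node_name>/proxy/stats/summary" | jq '.node.cpu, .node.memory'

The difference between the capacity and allocatable blocks corresponds to the system-reserved amounts and hard eviction thresholds that the formula subtracts.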
To improve node reliability, administrators should reserve resources based on a target for resource use. The node enforces resource constraints by using a new cgroup hierarchy that enforces quality of service. All pods are launched in a dedicated cgroup hierarchy that is separate from system daemons. Administrators should treat system daemons similar to pods that have a guaranteed quality of service. System daemons can burst within their bounding control groups and this behavior must be managed as part of cluster deployments. Reserve CPU and memory resources for system daemons by specifying the amount of CPU and memory resources in system-reserved . Enforcing system-reserved limits can prevent critical system services from receiving CPU and memory resources. As a result, a critical system service can be ended by the out-of-memory killer. The recommendation is to enforce system-reserved only if you have profiled the nodes exhaustively to determine precise estimates and you are confident that critical system services can recover if any process in that group is ended by the out-of-memory killer. 6.9.1.3. Understanding Eviction Thresholds If a node is under memory pressure, it can impact the entire node and all pods running on the node. For example, a system daemon that uses more than its reserved amount of memory can trigger an out-of-memory event. To avoid or reduce the probability of system out-of-memory events, the node provides out-of-resource handling. You can reserve some memory using the --eviction-hard flag. The node attempts to evict pods whenever memory availability on the node drops below the absolute value or percentage. If system daemons do not exist on a node, pods are limited to the memory capacity - eviction-hard . For this reason, resources set aside as a buffer for eviction before reaching out of memory conditions are not available for pods. The following is an example to illustrate the impact of node allocatable for memory: Node capacity is 32Gi --system-reserved is 3Gi --eviction-hard is set to 100Mi . For this node, the effective node allocatable value is 28.9Gi . If the node and system components use all their reservation, the memory available for pods is 28.9Gi , and kubelet evicts pods when it exceeds this threshold. If you enforce node allocatable, 28.9Gi , with top-level cgroups, then pods can never exceed 28.9Gi . Evictions are not performed unless system daemons consume more than 3.1Gi of memory. If system daemons do not use up all their reservation, with the above example, pods would face memcg OOM kills from their bounding cgroup before node evictions kick in. To better enforce QoS under this situation, the node applies the hard eviction thresholds to the top-level cgroup for all pods to be Node Allocatable + Eviction Hard Thresholds . If system daemons do not use up all their reservation, the node will evict pods whenever they consume more than 28.9Gi of memory. If eviction does not occur in time, a pod will be OOM killed if pods consume 29Gi of memory. 6.9.1.4. How the scheduler determines resource availability The scheduler uses the value of node.Status.Allocatable instead of node.Status.Capacity to decide if a node will become a candidate for pod scheduling. By default, the node will report its machine capacity as fully schedulable by the cluster. 6.9.2. 
Automatically allocating resources for nodes OpenShift Container Platform can automatically determine the optimal system-reserved CPU and memory resources for nodes associated with a specific machine config pool and update the nodes with those values when the nodes start. By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . To automatically determine and allocate the system-reserved resources on nodes, create a KubeletConfig custom resource (CR) to set the autoSizingReserved: true parameter. A script on each node calculates the optimal values for the respective reserved resources based on the installed CPU and memory capacity on each node. The script takes into account that increased capacity requires a corresponding increase in the reserved resources. Automatically determining the optimal system-reserved settings ensures that your cluster is running efficiently and prevents node failure due to resource starvation of system components, such as CRI-O and kubelet, without your needing to manually calculate and update the values. This feature is disabled by default. Prerequisites Obtain the label associated with the static MachineConfigPool object for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels . Tip If an appropriate label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change: Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Assign a name to CR. 2 Add the autoSizingReserved parameter set to true to allow OpenShift Container Platform to automatically determine and allocate the system-reserved resources on the nodes associated with the specified label. To disable automatic allocation on those nodes, set this parameter to false . 3 Specify the label from the machine config pool that you configured in the "Prerequisites" section. You can choose any desired labels for the machine config pool, such as custom-kubelet: small-pods , or the default label, pools.operator.machineconfiguration.openshift.io/worker: "" . The example enables automatic resource allocation on all worker nodes. OpenShift Container Platform drains the nodes, applies the kubelet config, and restarts the nodes. Create the CR by entering the following command: USD oc create -f <file_name>.yaml Verification Log in to a node you configured by entering the following command: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: # chroot /host View the /etc/node-sizing.env file: Example output SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08 The kubelet uses the system-reserved values in the /etc/node-sizing.env file. In the example, the worker nodes are allocated 0.08 CPU and 3 Gi of memory. It can take several minutes for the optimal values to appear. 6.9.3. Manually allocating resources for nodes OpenShift Container Platform supports the CPU and memory resource types for allocation. 
The ephemeral-storage resource type is also supported. For the cpu type, you specify the resource quantity in units of cores, such as 200m , 0.5 , or 1 . For memory and ephemeral-storage , you specify the resource quantity in units of bytes, such as 200Ki , 50Mi , or 5Gi . By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . As an administrator, you can set these values by using a kubelet config custom resource (CR) through a set of <resource_type>=<resource_quantity> pairs (e.g., cpu=200m,memory=512Mi ). Important You must use a kubelet config CR to manually set resource values. You cannot use a machine config CR. For details on the recommended system-reserved values, refer to the recommended system-reserved values . Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #... 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Specify the resources to reserve for the node components and system components. Run the following command to create the CR: USD oc create -f <file_name>.yaml 6.10. Allocating specific CPUs for nodes in a cluster When using the static CPU Manager policy , you can reserve specific CPUs for use by specific nodes in your cluster. For example, on a system with 24 CPUs, you could reserve CPUs numbered 0 - 3 for the control plane, allowing the compute nodes to use CPUs 4 - 23. 6.10.1. Reserving CPUs for nodes To explicitly define a list of CPUs that are reserved for specific nodes, create a KubeletConfig custom resource (CR) to define the reservedSystemCPUs parameter. This list supersedes the CPUs that might be reserved using the systemReserved parameter. Procedure Obtain the label associated with the machine config pool (MCP) for the type of node you want to configure: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #... 1 Get the MCP label. Create a YAML file for the KubeletConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: "0,1,2,3" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Specify a name for the CR. 2 Specify the core IDs of the CPUs you want to reserve for the nodes associated with the MCP. 3 Specify the label from the MCP.
Create the CR object: USD oc create -f <file_name>.yaml Additional resources For more information on the systemReserved parameter, see Allocating resources for nodes in an OpenShift Container Platform cluster . 6.11. Enabling TLS security profiles for the kubelet You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by the kubelet when it is acting as an HTTP server. The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and run exec commands on pods through the kubelet. A TLS security profile defines the TLS ciphers that the Kubernetes API server must use when connecting with the kubelet to protect communication between the kubelet and the Kubernetes API server. Note By default, when the kubelet acts as a client with the Kubernetes API server, it automatically negotiates the TLS parameters with the API server. 6.11.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 6.4. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 6.11.2. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig # ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" # ... 
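To select one of the other predefined profiles instead of Old , only the type value and the matching empty field change. A minimal sketch that makes the default Intermediate profile explicit for worker nodes (the metadata name is illustrative):

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-kubelet-intermediate-tls
spec:
  tlsSecurityProfile:
    type: Intermediate
    intermediate: {}
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""

Because Intermediate is already the default when no profile is configured, applying this object does not change the negotiated ciphers; it only records the choice explicitly.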
You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one. Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #... 6.12. Machine Config Daemon metrics The Machine Config Daemon is a part of the Machine Config Operator. It runs on every node in the cluster. The Machine Config Daemon manages configuration changes and updates on each of the nodes. 6.12.1. Machine Config Daemon metrics Beginning with OpenShift Container Platform 4.3, the Machine Config Daemon provides a set of metrics. These metrics can be accessed using the Prometheus Cluster Monitoring stack. The following table describes this set of metrics. Some entries contain commands for getting specific logs. However, the most comprehensive set of logs is available using the oc adm must-gather command. Note Metrics marked with * in the Name and Description columns represent serious errors that might cause performance problems. Such problems might prevent updates and upgrades from proceeding. Table 6.5. MCO metrics Name Format Description Notes mcd_host_os_and_version []string{"os", "version"} Shows the OS that MCD is running on, such as RHCOS or RHEL. In case of RHCOS, the version is provided. mcd_drain_err* Logs errors received during failed drain. * While drains might need multiple tries to succeed, terminal failed drains prevent updates from proceeding. The drain_time metric, which shows how much time the drain took, might help with troubleshooting. 
For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_pivot_err* []string{"err", "node", "pivot_target"} Logs errors encountered during pivot. * Pivot errors might prevent OS upgrades from proceeding. For further investigation, run this command to see the logs from the machine-config-daemon container: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_state []string{"state", "reason"} State of Machine Config Daemon for the indicated node. Possible states are "Done", "Working", and "Degraded". In case of "Degraded", the reason is included. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_kubelet_state* Logs kubelet health failures. * This is expected to be empty, with failure count of 0. If failure count exceeds 2, the error indicating threshold is exceeded. This indicates a possible issue with the health of the kubelet. For further investigation, run this command to access the node and see all its logs: USD oc debug node/<node> - chroot /host journalctl -u kubelet mcd_reboot_err* []string{"message", "err", "node"} Logs the failed reboots and the corresponding errors. * This is expected to be empty, which indicates a successful reboot. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_update_state []string{"config", "err"} Logs success or failure of configuration updates and the corresponding errors. The expected value is rendered-master/rendered-worker-XXXX . If the update fails, an error is present. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon Additional resources Monitoring overview Gathering data about your cluster 6.13. Creating infrastructure nodes Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. 
This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 6.13.1. OpenShift Container Platform infrastructure components Each self-managed Red Hat OpenShift subscription includes entitlements for OpenShift Container Platform and other OpenShift-related components. These entitlements are included for running OpenShift Container Platform control plane and infrastructure workloads and do not need to be accounted for during sizing. To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components: Kubernetes and OpenShift Container Platform control plane services The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Red Hat OpenShift Service Mesh Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 6.13.1.1. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. 
For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra="" 1 # ... 1 This example node selector deploys pods on infrastructure nodes by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets
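To keep ordinary application pods off the newly labeled infra nodes, rather than only steering infrastructure pods toward them, you can also taint those nodes. A minimal sketch; the reserved value is a common convention rather than a requirement, and the node name is a placeholder:

$ oc adm taint nodes <node-name> node-role.kubernetes.io/infra=reserved:NoSchedule

Any infrastructure workload that you then move to these nodes needs a matching toleration, for example:

tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/infra
  value: reserved

This is also why the earlier note about DNS pods applies: pods without such a toleration, including existing DNS pods on the node, are marked as misscheduled once the taint is added.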
[ "oc get nodes", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.25.0 node1.example.com Ready worker 7h v1.25.0 node2.example.com Ready worker 7h v1.25.0", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.25.0 node1.example.com NotReady,SchedulingDisabled worker 7h v1.25.0 node2.example.com Ready worker 7h v1.25.0", "oc get nodes -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.25.0 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.25.0-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.25.0 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.25.0-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.25.0 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.25.0-30.rhaos4.10.gitf2f339d.el8-dev", "oc get node <node>", "oc get node node1.example.com", "NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.25.0", "oc describe node <node>", "oc describe node node1.example.com", "Name: node1.example.com 1 Roles: worker 2 Labels: beta.kubernetes.io/arch=amd64 3 beta.kubernetes.io/instance-type=m4.large beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=us-east-2 failure-domain.beta.kubernetes.io/zone=us-east-2a kubernetes.io/hostname=ip-10-0-140-16 node-role.kubernetes.io/worker= Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux 
CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.25.0-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.25.0 Kube-Proxy Version: v1.25.0 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. 
#", "oc get pod --selector=<nodeSelector>", "oc get pod --selector=kubernetes.io/os", "oc get pod -l=<nodeSelector>", "oc get pod -l kubernetes.io/os=linux", "oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%", "oc adm top node --selector=''", "oc adm cordon <node1>", "node/<node1> cordoned", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.25.0", "oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]", "oc adm drain <node1> <node2> --force=true", "oc adm drain <node1> <node2> --grace-period=-1", "oc adm drain <node1> <node2> --ignore-daemonsets=true", "oc adm drain <node1> <node2> --timeout=5s", "oc adm drain <node1> <node2> --delete-emptydir-data=true", "oc adm drain <node1> <node2> --dry-run=true", "oc adm uncordon <node1>", "oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>", "oc label nodes webconsole-7f7f6 unhealthy=true", "kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #", "oc label pods --all <key_1>=<value_1>", "oc label pods --all status=unhealthy", "oc adm cordon <node>", "oc adm cordon node1.example.com", "node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled", "oc adm uncordon <node1>", "oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE>", "oc get machinesets -n openshift-machine-api", "oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api", "oc edit machineset <machine-set-name> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # name: <machine-set-name> namespace: openshift-machine-api # spec: replicas: 2 1 #", "oc adm cordon <node_name>", "oc adm drain <node_name> --force=true", "oc delete node <node_name>", "oc get machineconfigpool --show-labels", "NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False", "oc label machineconfigpool worker custom-kubelet=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #", "oc create -f <file-name>", "oc create -f master-kube-config.yaml", "oc edit schedulers.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: \"2019-09-10T03:04:05Z\" generation: 1 name: cluster resourceVersion: \"433\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 
3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #", "oc create -f 99-worker-setsebool.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3", "oc create -f 05-worker-kernelarg-selinuxpermissive.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.25.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.25.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.25.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.25.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.25.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.25.0", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... 
ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit", "oc label machineconfigpool worker kubelet-swap=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #", "#!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo \"Usage: 'USD0 node_name'\" exit 64 fi Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo \"The script needs OpenStack admin credentials. Exiting\"; exit 77; } Check for admin OpenShift credentials adm top node >/dev/null || { >&2 echo \"The script needs OpenShift admin credentials. Exiting\"; exit 77; } set -x declare -r node_name=\"USD1\" declare server_id server_id=\"USD(openstack server list --all-projects -f value -c ID -c Name | grep \"USDnode_name\" | cut -d' ' -f1)\" readonly server_id Drain the node adm cordon \"USDnode_name\" adm drain \"USDnode_name\" --delete-emptydir-data --ignore-daemonsets --force Power off the server debug \"node/USD{node_name}\" -- chroot /host shutdown -h 1 Verify the server is shut off until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Migrate the node openstack server migrate --wait \"USDserver_id\" Resize the VM openstack server resize confirm \"USDserver_id\" Wait for the resize confirm to finish until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Restart the VM openstack server start \"USDserver_id\" Wait for the node to show up as Ready: until oc get node \"USDnode_name\" | grep -q \"^USD{node_name}[[:space:]]\\+Ready\"; do sleep 5; done Uncordon the node adm uncordon \"USDnode_name\" Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type \"Degraded\" }}{{ if ne .status \"False\" }}DEGRADED{{ end }}{{ else if eq .type \"Progressing\"}}{{ if ne .status \"False\" }}PROGRESSING{{ end }}{{ else if eq .type \"Available\"}}{{ if ne .status \"True\" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\\(DEGRADED\\|PROGRESSING\\|NOTAVAILABLE\\)'; do sleep 5; done", "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #", "oc create -f <file_name>.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False", "get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator", "profile: - name: 
tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. 
name: provider-gce", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #", "oc adm cordon <node1>", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force", "error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction", "oc debug node/<node1>", "chroot /host", "systemctl reboot", "ssh core@<master-node>.<cluster_name>.<base_domain>", "sudo systemctl reboot", "oc adm uncordon <node1>", "ssh core@<target_node>", "sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #", "oc create -f <file_name>.yaml", "oc create -f gc-container.yaml", "kubeletconfig.machineconfiguration.openshift.io/gc-container created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: 
creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "oc debug node/<node_name>", "chroot /host", "SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #", "oc create -f <file_name>.yaml", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: \"0,1,2,3\" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f <filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/nodes/working-with-nodes
Chapter 7. ImageStreamMapping [image.openshift.io/v1]
Chapter 7. ImageStreamMapping [image.openshift.io/v1] Description ImageStreamMapping represents a mapping from a single image stream tag to a container image as well as the reference to the container image stream the image came from. This resource is used by privileged integrators to create an image resource and to associate it with an image stream in the status tags field. Creating an ImageStreamMapping will allow any user who can view the image stream to tag or pull that image, so only create mappings where the user has proven they have access to the image contents directly. The only operation supported for this resource is create and the metadata name and namespace should be set to the image stream containing the tag that should be updated. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required image tag 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata tag string Tag is a string value this image can be located with inside the stream. 7.1.1. .image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 7.1.2. .image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 7.1.3. .image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 7.1.4. 
.image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 7.1.5. .image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 7.1.6. .image.signatures Description Signatures holds all signatures of the image. Type array 7.1.7. .image.signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 7.1.8. .image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 7.1.9. .image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 7.1.10. .image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 7.1.11. .image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 7.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreammappings POST : create an ImageStreamMapping 7.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreammappings Table 7.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create an ImageStreamMapping Table 7.2. Body parameters Parameter Type Description body ImageStreamMapping schema Table 7.3. 
HTTP responses HTTP code Response body 200 - OK ImageStreamMapping schema 201 - Created ImageStreamMapping schema 202 - Accepted ImageStreamMapping schema 401 - Unauthorized Empty
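The following is a minimal sketch of an ImageStreamMapping manifest that could be submitted with the create endpoint above; the stream name, namespace, tag, digest, and image reference are placeholder values, not values taken from this reference.

apiVersion: image.openshift.io/v1
kind: ImageStreamMapping
metadata:
  name: my-stream          # set to the name of the target image stream
  namespace: my-project    # set to the namespace of the target image stream
tag: latest                # the tag in the stream that should point at the image
image:
  metadata:
    name: sha256:1111111111111111111111111111111111111111111111111111111111111111   # the image digest
  dockerImageReference: registry.example.com/my-project/my-image@sha256:1111111111111111111111111111111111111111111111111111111111111111

A client with permission to create imagestreammappings in the namespace could submit this with, for example, oc create -f mapping.yaml.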
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/image_apis/imagestreammapping-image-openshift-io-v1
Chapter 8. Checking integrity with AIDE
Chapter 8. Checking integrity with AIDE Advanced Intrusion Detection Environment (AIDE) is a utility that creates a database of files on the system, and then uses that database to ensure file integrity and detect system intrusions. 8.1. Installing AIDE To start file-integrity checking with AIDE, you must install the corresponding package and initiate the AIDE database. Prerequisites The AppStream repository is enabled. Procedure Install the aide package: Generate an initial database: Optional: In the default configuration, the aide --init command checks just a set of directories and files defined in the /etc/aide.conf file. To include additional directories or files in the AIDE database, and to change their watched parameters, edit /etc/aide.conf accordingly. To start using the database, remove the .new substring from the initial database file name: Optional: To change the location of the AIDE database, edit the /etc/aide.conf file and modify the DBDIR value. For additional security, store the database, configuration, and the /usr/sbin/aide binary file in a secure location such as read-only media. 8.2. Performing integrity checks with AIDE You can use the crond service to schedule regular file-integrity checks with AIDE. Prerequisites AIDE is properly installed and its database is initialized. See Installing AIDE Procedure To initiate a manual check: At a minimum, configure the system to run AIDE weekly. Optimally, run AIDE daily. For example, to schedule a daily execution of AIDE at 04:05 a.m. by using the cron command, add the following line to the /etc/crontab file: Additional resources cron(8) man page on your system 8.3. Updating an AIDE database After verifying the changes of your system, such as package updates or configuration file adjustments, also update your baseline AIDE database. Prerequisites AIDE is properly installed and its database is initialized. See Installing AIDE Procedure Update your baseline AIDE database: The aide --update command creates the /var/lib/aide/aide.db.new.gz database file. To start using the updated database for integrity checks, remove the .new substring from the file name. 8.4. File-integrity tools: AIDE and IMA Red Hat Enterprise Linux provides several tools for checking and preserving the integrity of files and directories on your system. The following table helps you decide which tool better fits your scenario. Table 8.1. Comparison between AIDE and IMA Question Advanced Intrusion Detection Environment (AIDE) Integrity Measurement Architecture (IMA) What AIDE is a utility that creates a database of files and directories on the system. This database serves for checking file integrity and detecting system intrusions. IMA detects if a file is altered by comparing file measurements (hash values) to values previously stored in extended attributes. How AIDE uses rules to compare the integrity state of the files and directories. IMA uses file hash values to detect intrusions. Why Detection - AIDE detects if a file is modified by verifying the rules. Detection and Prevention - IMA detects and prevents an attack by replacing the extended attribute of a file. Usage AIDE detects a threat when the file or directory is modified. IMA detects a threat when someone tries to alter the entire file. Extension AIDE checks the integrity of files and directories on the local system. IMA ensures security on the local and remote systems. 8.5. Additional resources aide(1) man page on your system Kernel integrity subsystem
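The watched paths and attributes mentioned above are defined with rule groups and selection lines in /etc/aide.conf. The following is a minimal illustrative sketch; the rule name and the /opt/myapp paths are examples and are not part of the default configuration:

# Define a custom rule group: permissions, inode, link count, user, group,
# size, mtime, ctime, and a SHA-512 checksum.
CUSTOMRULE = p+i+n+u+g+s+m+c+sha512
# Watch an additional directory tree with the custom rule group.
/opt/myapp CUSTOMRULE
# Exclude a frequently changing subdirectory from integrity checks.
!/opt/myapp/logs

After editing the configuration, regenerate the database with aide --init so that the new entries are included.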
[ "dnf install aide", "aide --init Start timestamp: 2024-07-08 10:39:23 -0400 (AIDE 0.16) AIDE initialized database at /var/lib/aide/aide.db.new.gz Number of entries: 55856 --------------------------------------------------- The attributes of the (uncompressed) database(s): --------------------------------------------------- /var/lib/aide/aide.db.new.gz ... SHA512 : mZaWoGzL2m6ZcyyZ/AXTIowliEXWSZqx IFYImY4f7id4u+Bq8WeuSE2jasZur/A4 FPBFaBkoCFHdoE/FW/V94Q==", "mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz", "aide --check Start timestamp: 2024-07-08 10:43:46 -0400 (AIDE 0.16) AIDE found differences between database and filesystem!! Summary: Total number of entries: 55856 Added entries: 0 Removed entries: 0 Changed entries: 1 --------------------------------------------------- Changed entries: --------------------------------------------------- f ... ..S : /root/.viminfo --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /root/.viminfo SELinux : system_u:object_r:admin_home_t:s | unconfined_u:object_r:admin_home 0 | _t:s0 ...", "05 4 * * * root /usr/sbin/aide --check", "aide --update" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/security_hardening/checking-integrity-with-aide_security-hardening
Red Hat Ansible Automation Platform operator backup and recovery guide
Red Hat Ansible Automation Platform operator backup and recovery guide Red Hat Ansible Automation Platform 2.4 Safeguard against data loss with backup and recovery of Ansible Automation Platform operator on OpenShift Container Platform Red Hat Customer Content Services
[ "df -h | grep \"/var/lib/pgsql/data\"", "--- apiVersion: automationcontroller.ansible.com/v1beta1 kind: AWXBackup metadata: name: awxbackup-2024-07-15 namespace: my-namespace spec: deployment_name: controller", "apiVersion: v1 kind: Secret metadata: name: external-restore-postgres-configuration namespace: <target_namespace> 1 stringData: host: \"<external_ip_or_url_resolvable_by_the_cluster>\" 2 port: \"<external_port>\" 3 database: \"<desired_database_name>\" username: \"<username_to_connect_as>\" password: \"<password_to_connect_with>\" 4 sslmode: \"prefer\" 5 type: \"unmanaged\" type: Opaque", "oc create -f external-postgres-configuration-secret.yml", "kind: AutomationControllerRestore apiVersion: automationcontroller.ansible.com/v1beta1 metadata: namespace: my-namespace name: awxrestore-2024-07-15 spec: deployment_name: restored_controller backup_name: awxbackup-2024-07-15 postgres_configuration_secret: 'external-restore-postgres-configuration'", "delete automationcontroller <YOUR_DEPLOYMENT_NAME> -n <YOUR_NAMESPACE> delete pvc postgres-13-<YOUR_DEPLOYMENT_NAME>-13-0 -n <YOUR_NAMESPACE>", "apply -f restore.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/red_hat_ansible_automation_platform_operator_backup_and_recovery_guide/index
Chapter 107. OpenSearch
Chapter 107. OpenSearch Since Camel 4.0 Only producer is supported The OpenSearch component allows you to interface with an OpenSearch 2.8.x API using the Java API Client library. 107.1. Dependencies When using bean with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-opensearch-starter</artifactId> </dependency> 107.2. URI format 107.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 107.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file ( application.properties or yaml ), directly with Java code. 107.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer ( from ) or as a producer ( to ) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. 107.4. Component Options The OpenSearch component supports 13 options, which are listed below. Name Description Default Type connectionTimeout (producer) The time in ms to wait before connection will time out. 30000 int hostAddresses (producer) Comma separated list with ip:port formatted remote transport addresses to use. The ip and port options must be left blank for hostAddresses to be considered instead. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean maxRetryTimeout (producer) The time in ms before retry. 30000 int socketTimeout (producer) The timeout in ms to wait before the socket will time out. 30000 int autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean client (advanced) Autowired To use an existing configured OpenSearch client, instead of creating a client per endpoint. This allows customizing the client with specific settings. RestClient enableSniffer (advanced) Enable automatically discover nodes from a running OpenSearch cluster. If this option is used in conjunction with Spring Boot, then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot). false boolean sniffAfterFailureDelay (advanced) The delay of a sniff execution scheduled after a failure (in milliseconds). 60000 int snifferInterval (advanced) The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions. 300000 int enableSSL (security) Enable SSL. false boolean password (security) Password for authenticating. String user (security) Basic authenticate user. String 107.5. Endpoint Options The OpenSearch endpoint is configured using URI syntax: With the following path and query parameters: 107.5.1. Path Parameters (1 parameters) Name Description Default Type clusterName (producer) Required Name of the cluster. String 107.5.2. Query Parameters (19 parameters) Name Description Default Type connectionTimeout (producer) The time in ms to wait before connection will time out. 30000 int disconnect (producer) Disconnect after it finishes calling the producer. false boolean from (producer) Starting index of the response. Integer hostAddresses (producer) Comma separated list with ip:port formatted remote transport addresses to use. String indexName (producer) The name of the index to act against. String maxRetryTimeout (producer) The time in ms before retry. 30000 int operation (producer) What operation to perform. Enum values: Index Update Bulk GetById MultiGet MultiSearch Delete DeleteIndex Search Exists Ping OpensearchOperation scrollKeepAliveMs (producer) Time in ms during which OpenSearch will keep search context alive. 60000 int size (producer) Size of the response. Integer socketTimeout (producer) The timeout in ms to wait before the socket will time out. 30000 int useScroll (producer) Enable scroll usage. false boolean waitForActiveShards (producer) Index creation waits for the right consistency number of shards to be available. 1 int lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean documentClass (advanced) The class to use when deserializing the documents. ObjectNode Class enableSniffer (advanced) Enable automatically discover nodes from a running OpenSearch cluster. If this option is used in conjunction with Spring Boot, then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot). false boolean sniffAfterFailureDelay (advanced) The delay of a sniff execution scheduled after a failure (in milliseconds). 
60000 int snifferInterval (advanced) The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions. 300000 int certificatePath (security) The certificate that can be used to access the ES Cluster. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String enableSSL (security) Enable SSL. false boolean 107.6. Message Headers The OpenSearch component supports 9 message header(s), listed below: Name Description Default Type operation (producer) Constant: PARAM_OPERATION The operation to perform. Enum values: Index Update Bulk GetById MultiGet MultiSearch Delete DeleteIndex Search Exists Ping OpensearchOperation indexId (producer) Constant: PARAM_INDEX_ID The id of the indexed document. String indexName (producer) Constant: PARAM_INDEX_NAME The name of the index to act against. String documentClass (producer) Constant: PARAM_DOCUMENT_CLASS The full qualified name of the class of the document to unmarshall. ObjectNode Class waitForActiveShards (producer) Constant: PARAM_WAIT_FOR_ACTIVE_SHARDS The index creation waits for the write consistency number of shards to be available. Integer scrollKeepAliveMs (producer) Constant: PARAM_SCROLL_KEEP_ALIVE_MS The starting index of the response. Integer useScroll (producer) Constant: PARAM_SCROLL Set to true to enable scroll usage. Boolean size (producer) Constant: PARAM_SIZE The size of the response. Integer from (producer) Constant: PARAM_FROM The starting index of the response. Integer 107.7. Usage 107.7.1. Message Operations The following OpenSearch operations are currently supported. Set an endpoint URI option or exchange header with a key of "operation" and a value set to one of the following. Some operations also require other parameters or the message body to be set. operation message body description Index Map , String , byte[] , Reader , InputStream or IndexRequest.Builder content to index Adds content to an index and returns the content's indexId in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the indexId by setting the message header with the key "indexId". GetById String or GetRequest.Builder index id of content to retrieve Retrieves the document corresponding to the given index id and returns a GetResponse object in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the type of document by setting the message header with the key "documentClass". Delete String or DeleteRequest.Builder index id of content to delete Deletes the specified indexName and returns a Result object in the body. You can set the name of the target index by setting the message header with the key "indexName". DeleteIndex String or DeleteIndexRequest.Builder index name of the index to delete Deletes the specified indexName and returns a status code in the body. You can set the name of the target index by setting the message header with the key "indexName". 
Bulk Iterable or BulkRequest.Builder of any type that is already accepted (DeleteOperation.Builder for delete operation, UpdateOperation.Builder for update operation, CreateOperation.Builder for create operation, byte[], InputStream, String, Reader, Map or any document type for index operation) Adds/Updates/Deletes content from/to an index and returns a List<BulkResponseItem> object in the body. You can set the name of the target index by setting the message header with the key "indexName". Search Map , String or SearchRequest.Builder Searches the content with the given map or query string. You can set the name of the target index by setting the message header with the key "indexName". You can set the number of hits to return by setting the message header with the key "size". You can set the starting document offset by setting the message header with the key "from". MultiSearch MsearchRequest.Builder Multiple search in one MultiGet Iterable<String> or MgetRequest.Builder the id of the document to retrieve Multiple get in one You can set the name of the target index by setting the message header with the key "indexName". Exists None Checks whether the index exists or not and returns a Boolean flag in the body. You must set the name of the target index by setting the message header with the key "indexName". Update byte[] , InputStream , String , Reader , Map or any document type content to update Updates content to an index and returns the content's indexId in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the indexId by setting the message header with the key "indexId". Ping 107.7.2. Configure the component and enable basic authentication To use the OpenSearch component, it must be configured with a minimal configuration. OpensearchComponent opensearchComponent = new OpensearchComponent(); opensearchComponent.setHostAddresses("opensearch-host:9200"); camelContext.addComponent("opensearch", opensearchComponent); For basic authentication with OpenSearch, or when using a reverse HTTP proxy in front of the OpenSearch cluster, simply set up basic authentication and SSL on the component as in the example below OpenSearchComponent opensearchComponent = new OpenSearchComponent(); opensearchComponent.setHostAddresses("opensearch-host:9200"); opensearchComponent.setUser("opensearchuser"); opensearchComponent.setPassword("secure!!"); camelContext.addComponent("opensearch", opensearchComponent); 107.7.3. Document type For all the search operations, it is possible to indicate the type of document to retrieve to get the result already unmarshalled with the expected type. The document type can be set using the header "documentClass" or via the uri parameter of the same name. 107.8. Examples 107.8.1. Index Example Below is a simple INDEX example from("direct:index") .to("opensearch://opensearch?operation=Index&indexName=twitter"); <route> <from uri="direct:index"/> <to uri="opensearch://opensearch?operation=Index&amp;indexName=twitter"/> </route> Note For this operation, you'll need to specify an indexId header. A client only needs to pass a body message containing a Map to the route. The result body contains the indexId created. Map<String, String> map = new HashMap<String, String>(); map.put("content", "test"); String indexId = template.requestBody("direct:index", map, String.class); 107.8.2. Search Example To search on specific field(s) and values, use the Search operation.
Pass in the query JSON String or the Map from("direct:search") .to("opensearch://opensearch?operation=Search&indexName=twitter"); <route> <from uri="direct:search"/> <to uri="opensearch://opensearch?operation=Search&amp;indexName=twitter"/> </route> String query = "{\"query\":{\"match\":{\"doc.content\":\"new release of ApacheCamel\"}}}"; HitsMetadata<?> response = template.requestBody("direct:search", query, HitsMetadata.class); Search on specific field(s) using Map. Map<String, Object> actualQuery = new HashMap<>(); actualQuery.put("doc.content", "new release of ApacheCamel"); Map<String, Object> match = new HashMap<>(); match.put("match", actualQuery); Map<String, Object> query = new HashMap<>(); query.put("query", match); HitsMetadata<?> response = template.requestBody("direct:search", query, HitsMetadata.class); Search using OpenSearch scroll api to fetch all results. from("direct:search") .to("opensearch://opensearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000"); <route> <from uri="direct:search"/> <to uri="opensearch://opensearch?operation=Search&amp;indexName=twitter&amp;useScroll=true&amp;scrollKeepAliveMs=30000"/> </route> String query = "{\"query\":{\"match\":{\"doc.content\":\"new release of ApacheCamel\"}}}"; try (OpenSearchScrollRequestIterator response = template.requestBody("direct:search", query, OpenSearchScrollRequestIterator.class)) { // do something smart with results } Split EIP can also be used. from("direct:search") .to("opensearch://opensearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000") .split() .body() .streaming() .to("mock:output") .end(); 107.8.3. MultiSearch Example MultiSearching on specific field(s) and value uses the Operation MultiSearch . Pass in the MultiSearchRequest instance from("direct:multiSearch") .to("opensearch://opensearch?operation=MultiSearch"); <route> <from uri="direct:multiSearch"/> <to uri="opensearch://opensearch?operation=MultiSearch"/> </route> MultiSearch on specific field(s) MsearchRequest.Builder builder = new MsearchRequest.Builder().index("twitter").searches( new RequestItem.Builder().header(new MultisearchHeader.Builder().build()) .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build(), new RequestItem.Builder().header(new MultisearchHeader.Builder().build()) .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build()); List<MultiSearchResponseItem<?>> response = template.requestBody("direct:multiSearch", builder, List.class);
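The GetById operation listed in the operations table follows the same pattern as the examples above. The following is a minimal sketch, not taken from the upstream examples; the route name and index name are illustrative, and the document is unmarshalled to the default ObjectNode type unless a documentClass is set:

from("direct:getById")
    .to("opensearch://opensearch?operation=GetById&indexName=twitter");

// The message body is the index id of the document to retrieve, for example
// the indexId returned by the Index example above; the operation returns a
// GetResponse in the body.
GetResponse<?> response = template.requestBody("direct:getById", indexId, GetResponse.class);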
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-opensearch-starter</artifactId> </dependency>", "opensearch://clusterName[?options]", "opensearch:clusterName", "OpensearchComponent opensearchComponent = new OpensearchComponent(); opensearchComponent.setHostAddresses(\"opensearch-host:9200\"); camelContext.addComponent(\"opensearch\", opensearchComponent);", "OpenSearchComponent opensearchComponent = new OpenSearchComponent(); opensearchComponent.setHostAddresses(\"opensearch-host:9200\"); opensearchComponent.setUser(\"opensearchuser\"); opensearchComponent.setPassword(\"secure!!\"); camelContext.addComponent(\"opensearch\", opensearchComponent);", "from(\"direct:index\") .to(\"opensearch://opensearch?operation=Index&indexName=twitter\");", "<route> <from uri=\"direct:index\"/> <to uri=\"opensearch://opensearch?operation=Index&amp;indexName=twitter\"/> </route>", "Map<String, String> map = new HashMap<String, String>(); map.put(\"content\", \"test\"); String indexId = template.requestBody(\"direct:index\", map, String.class);", "from(\"direct:search\") .to(\"opensearch://opensearch?operation=Search&indexName=twitter\");", "<route> <from uri=\"direct:search\"/> <to uri=\"opensearch://opensearch?operation=Search&amp;indexName=twitter\"/> </route>", "String query = \"{\\\"query\\\":{\\\"match\\\":{\\\"doc.content\\\":\\\"new release of ApacheCamel\\\"}}}\"; HitsMetadata<?> response = template.requestBody(\"direct:search\", query, HitsMetadata.class);", "Map<String, Object> actualQuery = new HashMap<>(); actualQuery.put(\"doc.content\", \"new release of ApacheCamel\"); Map<String, Object> match = new HashMap<>(); match.put(\"match\", actualQuery); Map<String, Object> query = new HashMap<>(); query.put(\"query\", match); HitsMetadata<?> response = template.requestBody(\"direct:search\", query, HitsMetadata.class);", "from(\"direct:search\") .to(\"opensearch://opensearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000\");", "<route> <from uri=\"direct:search\"/> <to uri=\"opensearch://opensearch?operation=Search&amp;indexName=twitter&amp;useScroll=true&amp;scrollKeepAliveMs=30000\"/> </route>", "String query = \"{\\\"query\\\":{\\\"match\\\":{\\\"doc.content\\\":\\\"new release of ApacheCamel\\\"}}}\"; try (OpenSearchScrollRequestIterator response = template.requestBody(\"direct:search\", query, OpenSearchScrollRequestIterator.class)) { // do something smart with results }", "from(\"direct:search\") .to(\"opensearch://opensearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000\") .split() .body() .streaming() .to(\"mock:output\") .end();", "from(\"direct:multiSearch\") .to(\"opensearch://opensearch?operation=MultiSearch\");", "<route> <from uri=\"direct:multiSearch\"/> <to uri=\"opensearch://opensearch?operation=MultiSearch\"/> </route>", "MsearchRequest.Builder builder = new MsearchRequest.Builder().index(\"twitter\").searches( new RequestItem.Builder().header(new MultisearchHeader.Builder().build()) .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build(), new RequestItem.Builder().header(new MultisearchHeader.Builder().build()) .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build()); List<MultiSearchResponseItem<?>> response = template.requestBody(\"direct:multiSearch\", builder, List.class);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-opensearch-component-starter
Chapter 10. Tutorial: Configuring Microsoft Entra ID (formerly Azure Active Directory) as an identity provider
Chapter 10. Tutorial: Configuring Microsoft Entra ID (formerly Azure Active Directory) as an identity provider You can configure Microsoft Entra ID (formerly Azure Active Directory) as the cluster identity provider in Red Hat OpenShift Service on AWS (ROSA). This tutorial guides you to complete the following tasks: Register a new application in Entra ID for authentication. Configure the application registration in Entra ID to include optional and group claims in tokens. Configure the Red Hat OpenShift Service on AWS cluster to use Entra ID as the identity provider. Grant additional permissions to individual groups. 10.1. Prerequisites You created a set of security groups and assigned users by following the Microsoft documentation . 10.2. Registering a new application in Entra ID for authentication To register your application in Entra ID, first create the OAuth callback URL, then register your application. Procedure Create the cluster's OAuth callback URL by changing the specified variables and running the following command: Note Remember to save this callback URL; it will be required later in the process. USD domain=USD(rosa describe cluster -c <cluster_name> | grep "DNS" | grep -oE '\S+.openshiftapps.com') echo "OAuth callback URL: https://oauth.USD{domain}/oauth2callback/AAD" The "AAD" directory at the end of the OAuth callback URL must match the OAuth identity provider name that you will set up later in this process. Create the Entra ID application by logging in to the Azure portal, and select the App registrations blade . Then, select New registration to create a new application. Name the application, for example openshift-auth . Select Web from the Redirect URI dropdown and enter the value of the OAuth callback URL you retrieved in the step. After providing the required information, click Register to create the application. Select the Certificates & secrets sub-blade and select New client secret . Complete the requested details and store the generated client secret value. This secret is required later in this process. Important After initial setup, you cannot see the client secret. If you did not record the client secret, you must generate a new one. Select the Overview sub-blade and note the Application (client) ID and Directory (tenant) ID . You will need these values in a future step. 10.3. Configuring the application registration in Entra ID to include optional and group claims So that Red Hat OpenShift Service on AWS has enough information to create the user's account, you must configure Entra ID to give two optional claims: email and preferred_username . For more information about optional claims in Entra ID, see the Microsoft documentation . In addition to individual user authentication, Red Hat OpenShift Service on AWS provides group claim functionality. This functionality allows an OpenID Connect (OIDC) identity provider, such as Entra ID, to offer a user's group membership for use within Red Hat OpenShift Service on AWS. 10.3.1. Configuring optional claims You can configure the optional claims in Entra ID. Click the Token configuration sub-blade and select the Add optional claim button. Select the ID radio button. Select the email claim checkbox. Select the preferred_username claim checkbox. Then, click Add to configure the email and preferred_username claims your Entra ID application. A dialog box appears at the top of the page. Follow the prompt to enable the necessary Microsoft Graph permissions. 10.3.2. 
Configuring group claims (optional) Configure Entra ID to offer a groups claim. Procedure From the Token configuration sub-blade, click Add groups claim. To configure group claims for your Entra ID application, select Security groups and then click Add. Note In this example, the group claim includes all of the security groups that a user is a member of. In a real production environment, ensure that the group claim only includes groups that apply to Red Hat OpenShift Service on AWS. 10.4. Configuring the Red Hat OpenShift Service on AWS cluster to use Entra ID as the identity provider You must configure Red Hat OpenShift Service on AWS to use Entra ID as its identity provider. Although ROSA offers the ability to configure identity providers by using OpenShift Cluster Manager, use the ROSA CLI to configure the cluster's OAuth provider to use Entra ID as its identity provider. Before configuring the identity provider, set the necessary variables for the identity provider configuration. Procedure Create the variables by running the following command: USD CLUSTER_NAME=example-cluster 1 USD IDP_NAME=AAD 2 USD APP_ID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy 3 USD CLIENT_SECRET=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 4 USD TENANT_ID=zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz 5 1 Replace this with the name of your ROSA cluster. 2 Replace this value with the name you used in the OAuth callback URL that you generated earlier in this process. 3 Replace this with the Application (client) ID. 4 Replace this with the Client Secret. 5 Replace this with the Directory (tenant) ID. Configure the cluster's OAuth provider by running the following command. If you enabled group claims, ensure that you use the --groups-claims groups argument. If you enabled group claims, run the following command: USD rosa create idp \ --cluster USD{CLUSTER_NAME} \ --type openid \ --name USD{IDP_NAME} \ --client-id USD{APP_ID} \ --client-secret USD{CLIENT_SECRET} \ --issuer-url https://login.microsoftonline.com/USD{TENANT_ID}/v2.0 \ --email-claims email \ --name-claims name \ --username-claims preferred_username \ --extra-scopes email,profile \ --groups-claims groups If you did not enable group claims, run the following command: USD rosa create idp \ --cluster USD{CLUSTER_NAME} \ --type openid \ --name USD{IDP_NAME} \ --client-id USD{APP_ID} \ --client-secret USD{CLIENT_SECRET} \ --issuer-url https://login.microsoftonline.com/USD{TENANT_ID}/v2.0 \ --email-claims email \ --name-claims name \ --username-claims preferred_username \ --extra-scopes email,profile After a few minutes, the cluster authentication Operator reconciles your changes, and you can log in to the cluster by using Entra ID. 10.5. Granting additional permissions to individual users and groups When you first log in, you might notice that you have very limited permissions. By default, Red Hat OpenShift Service on AWS only grants you the ability to create new projects, or namespaces, in the cluster. Other projects are restricted from view. You must grant these additional abilities to individual users and groups. 10.5.1. Granting additional permissions to individual users Red Hat OpenShift Service on AWS includes a significant number of preconfigured roles, including the cluster-admin role that grants full access and control over the cluster.
Procedure Grant a user access to the cluster-admin role by running the following command: USD rosa grant user cluster-admin \ --user=<USERNAME> 1 --cluster=USD{CLUSTER_NAME} 1 Provide the Entra ID username that you want to have cluster admin permissions. 10.5.2. Granting additional permissions to individual groups If you opted to enable group claims, the cluster OAuth provider automatically creates or updates the user's group memberships by using the group ID. The cluster OAuth provider does not automatically create RoleBindings and ClusterRoleBindings for the groups that are created; you are responsible for creating those bindings by using your own processes. To grant an automatically generated group access to the cluster-admin role, you must create a ClusterRoleBinding to the group ID. Procedure Create the ClusterRoleBinding by running the following command: USD oc create clusterrolebinding cluster-admin-group \ --clusterrole=cluster-admin \ --group=<GROUP_ID> 1 1 Provide the Entra ID group ID that you want to have cluster admin permissions. Now, any user in the specified group automatically receives cluster-admin access. 10.6. Additional resources For more information about how to use RBAC to define and apply permissions in Red Hat OpenShift Service on AWS, see the Red Hat OpenShift Service on AWS documentation .
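As an optional check after completing the steps in this tutorial, you can confirm that the identity provider and the automatically created groups are visible to the cluster. This is a minimal sketch; the cluster name is a placeholder:

rosa list idps --cluster <cluster_name>   # the Entra ID entry is listed with the OpenID type
oc get groups                             # group IDs appear after a member of the group logs in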
[ "domain=USD(rosa describe cluster -c <cluster_name> | grep \"DNS\" | grep -oE '\\S+.openshiftapps.com') echo \"OAuth callback URL: https://oauth.USD{domain}/oauth2callback/AAD\"", "CLUSTER_NAME=example-cluster 1 IDP_NAME=AAD 2 APP_ID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy 3 CLIENT_SECRET=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 4 TENANT_ID=zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz 5", "rosa create idp --cluster USD{CLUSTER_NAME} --type openid --name USD{IDP_NAME} --client-id USD{APP_ID} --client-secret USD{CLIENT_SECRET} --issuer-url https://login.microsoftonline.com/USD{TENANT_ID}/v2.0 --email-claims email --name-claims name --username-claims preferred_username --extra-scopes email,profile --groups-claims groups", "rosa create idp --cluster USD{CLUSTER_NAME} --type openid --name USD{IDP_NAME} --client-id USD{APP_ID} --client-secret USD{CLIENT_SECRET} --issuer-url https://login.microsoftonline.com/USD{TENANT_ID}/v2.0 --email-claims email --name-claims name --username-claims preferred_username --extra-scopes email,profile", "rosa grant user cluster-admin --user=<USERNAME> 1 --cluster=USD{CLUSTER_NAME}", "oc create clusterrolebinding cluster-admin-group --clusterrole=cluster-admin --group=<GROUP_ID> 1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/tutorials/cloud-experts-entra-id-idp
Architecture
Architecture OpenShift Container Platform 4.12 An overview of the architecture for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "openshift-install create ignition-configs --dir USDHOME/testconfig", "cat USDHOME/testconfig/bootstrap.ign | jq { \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc....\" ] } ] }, \"storage\": { \"files\": [ { \"overwrite\": false, \"path\": \"/etc/motd\", \"user\": { \"name\": \"root\" }, \"append\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==\" } ], \"mode\": 420 },", "echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode", "This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service", "\"source\": \"https://api.myign.develcluster.example.com:22623/config/worker\",", "USD oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m", "oc describe machineconfigs 01-worker-container-runtime | grep Path:", "Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown", "oc new-project my-webhook-namespace 1", "apiVersion: v1 kind: List items: - apiVersion: rbac.authorization.k8s.io/v1 1 kind: ClusterRoleBinding metadata: name: 
auth-delegator-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:auth-delegator subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRole metadata: annotations: name: system:openshift:online:my-webhook-server rules: - apiGroups: - online.openshift.io resources: - namespacereservations 3 verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRole metadata: name: system:openshift:online:my-webhook-requester rules: - apiGroups: - admission.online.openshift.io resources: - namespacereservations 5 verbs: - create - apiVersion: rbac.authorization.k8s.io/v1 6 kind: ClusterRoleBinding metadata: name: my-webhook-server-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:openshift:online:my-webhook-server subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 7 kind: RoleBinding metadata: namespace: kube-system name: extension-server-authentication-reader-my-webhook-namespace roleRef: kind: Role apiGroup: rbac.authorization.k8s.io name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 8 kind: ClusterRole metadata: name: my-cluster-role rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations - mutatingwebhookconfigurations verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-cluster-role roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: my-cluster-role subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server", "oc auth reconcile -f rbac.yaml", "apiVersion: apps/v1 kind: DaemonSet metadata: namespace: my-webhook-namespace name: server labels: server: \"true\" spec: selector: matchLabels: server: \"true\" template: metadata: name: server labels: server: \"true\" spec: serviceAccountName: server containers: - name: my-webhook-container 1 image: <image_registry_username>/<image_path>:<tag> 2 imagePullPolicy: IfNotPresent command: - <container_commands> 3 ports: - containerPort: 8443 4 volumeMounts: - mountPath: /var/serving-cert name: serving-cert readinessProbe: httpGet: path: /healthz port: 8443 5 scheme: HTTPS volumes: - name: serving-cert secret: defaultMode: 420 secretName: server-serving-cert", "oc apply -f webhook-daemonset.yaml", "apiVersion: v1 kind: Secret metadata: namespace: my-webhook-namespace name: server-serving-cert type: kubernetes.io/tls data: tls.crt: <server_certificate> 1 tls.key: <server_key> 2", "oc apply -f webhook-secret.yaml", "apiVersion: v1 kind: List items: - apiVersion: v1 kind: ServiceAccount metadata: namespace: my-webhook-namespace name: server - apiVersion: v1 kind: Service metadata: namespace: my-webhook-namespace name: server annotations: service.beta.openshift.io/serving-cert-secret-name: server-serving-cert spec: selector: server: \"true\" ports: - port: 443 1 targetPort: 8443 2", "oc apply -f webhook-service.yaml", "apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: namespacereservations.online.openshift.io 1 spec: group: online.openshift.io 2 version: v1alpha1 3 scope: Cluster 4 names: plural: namespacereservations 5 singular: 
namespacereservation 6 kind: NamespaceReservation 7", "oc apply -f webhook-crd.yaml", "apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.admission.online.openshift.io spec: caBundle: <ca_signing_certificate> 1 group: admission.online.openshift.io groupPriorityMinimum: 1000 versionPriority: 15 service: name: server namespace: my-webhook-namespace version: v1beta1", "oc apply -f webhook-api-service.yaml", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: name: namespacereservations.admission.online.openshift.io 1 webhooks: - name: namespacereservations.admission.online.openshift.io 2 clientConfig: service: 3 namespace: default name: kubernetes path: /apis/admission.online.openshift.io/v1beta1/namespacereservations 4 caBundle: <ca_signing_certificate> 5 rules: - operations: - CREATE apiGroups: - project.openshift.io apiVersions: - \"*\" resources: - projectrequests - operations: - CREATE apiGroups: - \"\" apiVersions: - \"*\" resources: - namespaces failurePolicy: Fail", "oc apply -f webhook-config.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/architecture/index
Chapter 30. Authentication and Interoperability
Chapter 30. Authentication and Interoperability sssd component, BZ# 1081046 The accountExpires attribute that SSSD uses to see whether an account has expired is not replicated to the Global Catalog by default. As a result, users with expired accounts can be allowed to log in when using GSSAPI authentication. To work around this problem, the Global Catalog support can be disabled by specifying ad_enable_gc=False in the sssd.conf file. With this setting, users with expired accounts will be denied access when using GSSAPI authentication. Note that SSSD connects to each LDAP server individually in this scenario, which can increase the connection count. ipa component, BZ# 1004156 When DNS support is being added for an Identity Management server (for example, by using ipa-dns-install or by using the --setup-dns option with the ipa-server-install or ipa-replica-install commands), the script adds a host name of a new Identity Management DNS server to the list of name servers in the primary Identity Management DNS zone (via DNS NS record). However, it does not add the DNS name server record to other DNS zones served by the Identity Management servers. As a consequence, the list of name servers in the non-primary DNS zones has only a limited set of Identity Management name servers serving the DNS zone (only one, without user intervention). When the limited set of Identity Management name servers is not available, these DNS zones are not resolvable. To work around this problem, manually add new DNS name server records to all non-primary DNS zones when a new Identity Management replica is being added. Also, manually remove such DNS name server records when the replica is being decommissioned. Non-primary DNS zones can maintain higher availability by having a manually maintained set of Identity Management name servers serving it. ipa component, BZ#971384 The default Unlock user accounts permission does not include the nsaccountlock attribute, which is necessary for a successful unlocking of a user entry. Consequently, a privileged user with this permission assigned cannot unlock another user, and errors like the following are displayed: To work around this problem, add nssacountlock to the list of allowed attributes in the aforementioned permission by running the following command: As a result, users with the Unlock user accounts permission assigned can unlock other users. ipa component, BZ# 973195 There are multiple problems across different tools used in the Identity Management installation, which prevents installation of user-provided certificates with intermediate certificate authority ( CA ). One of the errors is that incorrect trust flags are assigned to the intermediate CA certificate when importing a PKCS#12 file. Consequently, the Identity Management server installer fails due to an incomplete trust chain that is returned for Identity Management services. There is no known workaround, certificates not issued by the embedded Certificate Authority must not contain an intermediate CA in their trust chain. ipa component , BZ# 988473 Access control to lightweight directory access protocol ( LDAP ) objects representing trust with Active Directory (AD) is given to the Trusted Admins group in Identity Management. In order to establish the trust, the Identity Management administrator should belong to a group which is a member of the "Trusted Admins" group and this group should have relative identifier (RID) 512 assigned. 
To ensure this, run the ipa-adtrust-install command and then the ipa group-show admins --all command to verify that the "ipantsecurityidentifier" field contains a value ending with the "-512" string. If the field does not end with "-512", use the ipa group-mod admins --setattr=ipantsecurityidentifier=SID command, where SID is the value of the field from the ipa group-show admins --all command output with the last component value (-XXXX) replaced by the "-512" string. ipa component, BZ# 1084018 Red Hat Enterprise Linux 7 contains an updated version of slapi-nis , a Directory Server plug-in, which allows users of Identity Management and the Active Directory service to authenticate on legacy clients. However, the slapi-nis component only enables identity and authentication services, but does not allow users to change their password. As a consequence, users logged to legacy clients via slapi-nis compatibility tree can change their password only via the Identity Management Server Self-Service Web UI page or directly in Active Directory. ipa component, BZ# 1060349 The ipa host-add command does not verify the existence of AAAA records. As a consequence, ipa host-add fails if no A record is available for the host, although an AAAA record exists. To work around this problem, run ipa host-add with the --force option. ipa component, BZ# 1081626 An IPA master is uninstalled while SSL certificates for services other than IPA servers are tracked by the certmonger service. Consequently, an unexpected error can occur, and the uninstallation fails. To work around this problem, start certmonger , and run the ipa-getcert command to list the tracked certificates. Then run the ipa-getcert stop-tracking -i <Request ID> command to stop certmonger from tracking the certificates, and run the IPA uninstall script again. ipa component, BZ# 1088683 The ipa-client-install command does not process the --preserve-sssd option correctly when generating the IPA domain configuration in the sssd.conf file. As a consequence, the original configuration of the IPA domain is overwritten. To work around this problem, review sssd.conf after running ipa-client-install to identify and manually fix any unwanted changes. certmonger component, BZ# 996581 The directory containing a private key or certificate can have an incorrect SELinux context. As a consequence, the ipa-getcert request -k command fails, and an unhelpful error message is displayed. To work around this problem, set the SELinux context on the directory containing the certificate and the key to cert_t . You can resubmit an existing certificate request by running the ipa-getcert resubmit -i <Request ID> command. sssd component, BZ# 1103249 Under certain circumstances, the algorithm in the Privilege Attribute Certificate (PAC) responder component of the System Security Services Daemon (SSSD) does not effectively handle users who are members of a large number of groups. As a consequence, logging from Windows clients to Red Hat Enterprise Linux clients with Kerberos single sign-on (SSO) can be noticeably slow. There is currently no known workaround available. ipa component, BZ# 1033357 The ipactl restart command requires the directory server to be running. Consequently, if this condition is not met, ipactl restart fails with an error message. To work around this problem, use the ipactl start command to start the directory server before executing ipactl restart . Note that the ipactl status command can be used to verify if the directory server is running. 
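For the ipactl restart issue just described, the workaround can be sketched as the following command sequence; it simply restates the commands named in the note and is illustrative rather than exact output:
# Workaround sketch for ipactl restart failing when the directory server is down.
ipactl status     # check whether the directory server is currently running
ipactl start      # start the directory server and the remaining IPA services
ipactl restart    # the restart now succeeds because the directory server is up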
pki-core component, BZ#1085105 The certificate subsystem fails to install if the system language is set to Turkish. To work around this problem, set the system language to English by putting the following line in the /etc/sysconfig/i18n file: LANG="en_US.UTF-8" Also, remove any other "LANG=" entries in /etc/sysconfig/i18n , then reboot the system. After reboot, you can successfully run ipa-server-install , and the original contents of /etc/sysconfig/i18n may be restored. ipa component, BZ# 1020563 The ipa-server-install and ipa-replica-install commands replace the list of NTP servers in the /etc/ntp.conf file with Red Hat Enterprise Linux default servers. As a consequence, NTP servers configured before installing IPA are not contacted, and servers from rhel.pool.ntp.org are contacted instead. If those default servers are unreachable, the IPA server does not synchronize its time via NTP. To work around this problem, add any custom NTP servers to /etc/ntp.conf , and remove the default Red Hat Enterprise Linux servers if required. The configured servers are now used for time synchronization after restarting the NTP service by running the systemctl restart ntpd.service command. gnutls component, BZ# 1084080 The gnutls utility fails to generate a non-encrypted private key when the user enters an empty password. To work around this problem, use the certtool command with the password option as follows: ~]USD certtool --generate-privkey --pkcs8 --password "" --outfile pkcs8.key
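For the NTP issue described above, a minimal sketch of the workaround follows; the server name ntp1.example.com is a placeholder and not taken from the original note:
# Point /etc/ntp.conf at a custom NTP server and drop the default pool entries.
echo "server ntp1.example.com iburst" >> /etc/ntp.conf   # placeholder server name
sed -i '/rhel\.pool\.ntp\.org/d' /etc/ntp.conf           # remove the default servers if required
systemctl restart ntpd.service                           # use the new servers for time synchronization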
[ "ipa: ERROR: Insufficient access: Insufficient 'write' privilege to the 'nsAccountLock' attribute of entry 'uid=user,cn=users,cn=accounts,dc=example,dc=com'.", "~]# ipa permission-mod \"Unlock user accounts\" --attrs={krbLastAdminUnlock,krbLoginFailedCount,nsaccountlock}" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/known-issues-authentication_and_interoperability
10.2.2. Enabling FIPS Mode for Applications Using NSS
10.2.2. Enabling FIPS Mode for Applications Using NSS The procedure for enabling FIPS mode on Red Hat Enterprise Linux systems described in Section 10.2.1, "Enabling FIPS Mode" does not affect the FIPS state of Network Security Services (NSS), and thus does not affect applications using NSS. When required, the user can switch any NSS application to FIPS mode using the following command: Replace dir with the directory specifying the NSS database used by the application. If more than one NSS application uses this database, all these applications will be switched into FIPS mode. The applications have to be restarted for the NSS FIPS mode to take effect. Provided that the nss-sysinit package is installed, and the application whose NSS database you need to locate opens the /etc/pki/nssdb file, the path to the user NSS database is ~/.pki/nssdb . To enable FIPS mode for the Firefox web browser and the Thunderbird email client, go to Edit Preferences Advanced Certificates Security Devices Enable FIPS .
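For example, assuming the application uses the user NSS database mentioned above, the following sketch switches that database to FIPS mode; the -chkfips check is an additional verification step not mentioned in the original text:
# Enable FIPS mode for the NSS database in the user's home directory.
modutil -fips true -dbdir ~/.pki/nssdb
# Verify that FIPS mode is now enabled (some NSS versions expect the sql: prefix, for example -dbdir sql:~/.pki/nssdb).
modutil -chkfips true -dbdir ~/.pki/nssdb
# Restart the application that uses this database for the change to take effect.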
[ "~]# modutil -fips true -dbdir dir" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/enabling-fips-mode-for-apps-using-nss
Chapter 1. Plugins in Red Hat Developer Hub
Chapter 1. Plugins in Red Hat Developer Hub The Red Hat Developer Hub (RHDH) application offers a unified platform with various plugins. Using the plugin ecosystem within the RHDH application, you can access any kind of development infrastructure or software development tool. Plugins are modular extensions for RHDH that extend functionality, streamline development workflows, and improve the developer experience. You can add and configure plugins in RHDH to access various software development tools. Each plugin is designed as a self-contained application and can incorporate any type of content. The plugins utilize a shared set of platform APIs and reusable UI components. Plugins can also retrieve data from external sources through APIs or by relying on external modules to perform the tasks. RHDH provides both static and dynamic plugins that enhance its functionality. Static plugins are integrated into the core of the RHDH application, while dynamic plugins can be sideloaded into your Developer Hub instance without the need to recompile your code or rebuild the container. To install or update a static plugin, you must update your RHDH application source code and rebuild the application and container image. To install or update a dynamic plugin, you must restart your RHDH application after installing the plugin. You can also import your own custom-built or third-party plugins or create new features using dynamic plugins. Dynamic plugins boost modularity and scalability by enabling more flexible and efficient functionality loading, significantly enhancing the developer experience and customization of your RHDH instance. 1.1. Dynamic plugins in Red Hat Developer Hub You can use RHDH dynamic plugins in environments where flexibility, scalability, and customization are key. Using dynamic plugins in RHDH provides: Modularity and extensibility You can add or modify features without altering the core RHDH application. This modular approach makes it easier to extend functionality as needs evolve. Customization You can tailor RHDH to fit specific workflows and use cases, enhancing the overall user experience. Reduced maintenance and update overhead You can deploy updates or new features independently of the main RHDH codebase, reducing the risks and efforts associated with maintaining and updating the platform. Faster iteration You can create and test new features more rapidly as plugins, encouraging experimentation and enabling you to quickly iterate based on feedback. Improved collaboration You can share plugins across teams or even externally. This sharing can foster collaboration and reduce duplication of effort, as well as help establish best practices across an organization. Scalability As organizations grow, their needs become more complex. Dynamic plugins enable RHDH to scale alongside such complex needs, accommodating an increasing number of users and services. Ecosystem growth Fostering the development of plugins can create a dynamic ecosystem around RHDH. This community can contribute plugins that cater to different needs, thereby enhancing the platform. Security and compliance You can develop plugins with specific security and compliance requirements in mind, ensuring that RHDH installations meet the necessary standards without compromising the core application. Overall, the use of dynamic plugins in RHDH promotes a flexible, adaptable, and sustainable approach to managing and scaling development infrastructure. 1.2.
Comparing dynamic plugins to static plugins Static plugins are built into the core of the RHDH application. Installing or updating a static plugin requires rebuilding and redeploying the application. The following comparison summarizes the differences between static and dynamic plugins in RHDH. Integration: static plugins are built into the core application, while dynamic plugins are loaded at runtime, separate from the core. Flexibility: static plugins require core changes to add or update features, while dynamic plugins add or update features without core changes. Development speed: static plugins are slower because new features require a complete rebuild, while dynamic plugins let you deploy new functionality quickly. Customization: static plugins are limited to predefined options, while dynamic plugins make it easy to tailor the platform by loading specific plugins. Maintenance: static plugins are more complex to maintain due to tightly coupled features, while dynamic plugins benefit from the modular architecture. Resource use: with static plugins, all features are loaded at startup, while with dynamic plugins only the necessary plugins are loaded. Innovation: static plugins slow experimentation because of rebuild cycles, while dynamic plugins enable quick experimentation with new plugins.
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/introduction_to_plugins/con-rhdh-plugins
Chapter 15. Removing an ID range after removing a trust to AD
Chapter 15. Removing an ID range after removing a trust to AD If you have removed a trust between your IdM and Active Directory (AD) environments, you might want to remove the ID range associated with it. Warning IDs allocated to ID ranges associated with trusted domains might still be used for ownership of files and directories on systems enrolled into IdM. If you remove the ID range that corresponds to an AD trust that you have removed, you will not be able to resolve the ownership of any files and directories owned by AD users. Prerequisites You have removed a trust to an AD environment. Procedure Display all the ID ranges that are currently in use: Identify the name of the ID range associated with the trust you have removed. The first part of the name of the ID range is the name of the trust, for example AD.EXAMPLE.COM_id_range . Remove the range: Restart the SSSD service to remove references to the ID range you have removed.
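Taken together, the procedure can be sketched as the following sequence, where AD.EXAMPLE.COM_id_range stands in for the range name that belongs to your removed trust:
# List the ID ranges currently in use and note the one named after the removed trust.
ipa idrange-find
# Remove the ID range that corresponds to the removed AD trust.
ipa idrange-del AD.EXAMPLE.COM_id_range
# Restart SSSD to remove references to the deleted range.
systemctl restart sssd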
[ "ipa idrange-find", "ipa idrange-del AD.EXAMPLE.COM_id_range", "systemctl restart sssd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_trust_between_idm_and_ad/proc_removing-an-id-range-after-removing-a-trust-to-ad_installing-trust-between-idm-and-ad
Chapter 13. Increasing the size of an XFS file system
Chapter 13. Increasing the size of an XFS file system As a system administrator, you can increase the size of an XFS file system to make a complete use of a larger storage capacity. Important It is not currently possible to decrease the size of XFS file systems. 13.1. Increasing the size of an XFS file system with xfs_growfs This procedure describes how to grow an XFS file system using the xfs_growfs utility. Prerequisites Ensure that the underlying block device is of an appropriate size to hold the resized file system later. Use the appropriate resizing methods for the affected block device. Mount the XFS file system. Procedure While the XFS file system is mounted, use the xfs_growfs utility to increase its size: Replace file-system with the mount point of the XFS file system. With the -D option, replace new-size with the desired new size of the file system specified in the number of file system blocks. To find out the block size in kB of a given XFS file system, use the xfs_info utility: Without the -D option, xfs_growfs grows the file system to the maximum size supported by the underlying device. Additional resources xfs_growfs(8) man page on your system.
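As a worked example, the following sketch grows an XFS file system mounted at the hypothetical mount point /mnt/data after the underlying block device has been enlarged:
# Check the current geometry and block size (bsize) of the mounted file system.
xfs_info /mnt/data
# Grow the file system to the maximum size supported by the underlying device.
xfs_growfs /mnt/data
# Alternatively, grow it to an explicit size given in file system blocks
# (with 4096-byte blocks, 5242880 blocks correspond to 20 GiB).
xfs_growfs /mnt/data -D 5242880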
[ "xfs_growfs file-system -D new-size", "xfs_info block-device data = bsize=4096" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/increasing-the-size-of-an-xfs-file-system_managing-file-systems
Chapter 1. Introduction to director
Chapter 1. Introduction to director The Red Hat OpenStack Platform (RHOSP) director is a toolset for installing and managing a complete RHOSP environment. Director is based primarily on the OpenStack project TripleO. With director you can install a fully-operational, lean, and robust RHOSP environment that can provision and control bare metal systems to use as OpenStack nodes. Director uses two main concepts: an undercloud and an overcloud. First you install the undercloud, and then use the undercloud as a tool to install and configure the overcloud. 1.1. Understanding the undercloud The undercloud is the main management node that contains the Red Hat OpenStack Platform director toolset. It is a single-system OpenStack installation that includes components for provisioning and managing the OpenStack nodes that form your OpenStack environment (the overcloud). The components that form the undercloud have multiple functions: Environment planning The undercloud includes planning functions that you can use to create and assign certain node roles. The undercloud includes a default set of node roles that you can assign to specific nodes: Compute, Controller, and various Storage roles. You can also design custom roles. Additionally, you can select which Red Hat OpenStack Platform services to include on each node role, which provides a method to model new node types or isolate certain components on their own host. Bare metal system control The undercloud uses the out-of-band management interface, usually Intelligent Platform Management Interface (IPMI), of each node for power management control and a PXE-based service to discover hardware attributes and install OpenStack on each node. You can use this feature to provision bare metal systems as OpenStack nodes. For a full list of power management drivers, see Chapter 30, Power management drivers . Orchestration The undercloud contains a set of YAML templates that represent a set of plans for your environment. The undercloud imports these plans and follows their instructions to create the resulting OpenStack environment. The plans also include hooks that you can use to incorporate your own customizations as certain points in the environment creation process. Undercloud components The undercloud uses OpenStack components as its base tool set. Each component operates within a separate container on the undercloud: OpenStack Identity (keystone) - Provides authentication and authorization for the director components. OpenStack Bare Metal (ironic) and OpenStack Compute (nova) - Manages bare metal nodes. OpenStack Networking (neutron) and Open vSwitch - Control networking for bare metal nodes. OpenStack Image Service (glance) - Stores images that director writes to bare metal machines. OpenStack Orchestration (heat) and Puppet - Provides orchestration of nodes and configuration of nodes after director writes the overcloud image to disk. OpenStack Workflow Service (mistral) - Provides a set of workflows for certain director-specific actions, such as importing and deploying plans. OpenStack Messaging Service (zaqar) - Provides a messaging service for the OpenStack Workflow Service. OpenStack Object Storage (swift) - Provides object storage for various OpenStack Platform components, including: Image storage for OpenStack Image Service Introspection data for OpenStack Bare Metal Deployment plans for OpenStack Workflow Service 1.2. 
Understanding the overcloud The overcloud is the resulting Red Hat OpenStack Platform (RHOSP) environment that the undercloud creates. The overcloud consists of multiple nodes with different roles that you define based on the OpenStack Platform environment that you want to create. The undercloud includes a default set of overcloud node roles: Controller Controller nodes provide administration, networking, and high availability for the OpenStack environment. A recommended OpenStack environment contains three Controller nodes together in a high availability cluster. A default Controller node role supports the following components. Not all of these services are enabled by default. Some of these components require custom or pre-packaged environment files to enable: OpenStack Dashboard (horizon) OpenStack Identity (keystone) OpenStack Compute (nova) API OpenStack Networking (neutron) OpenStack Image Service (glance) OpenStack Block Storage (cinder) OpenStack Object Storage (swift) OpenStack Orchestration (heat) OpenStack Shared File Systems (manila) OpenStack Bare Metal (ironic) OpenStack Load Balancing-as-a-Service (octavia) OpenStack Key Manager (barbican) MariaDB Open vSwitch Pacemaker and Galera for high availability services. Compute Compute nodes provide computing resources for the OpenStack environment. You can add more Compute nodes to scale out your environment over time. A default Compute node contains the following components: OpenStack Compute (nova) KVM/QEMU OpenStack Telemetry (ceilometer) agent Open vSwitch Storage Storage nodes provide storage for the OpenStack environment. The following list contains information about the various types of Storage node in RHOSP: Ceph Storage nodes - Used to form storage clusters. Each node contains a Ceph Object Storage Daemon (OSD). Additionally, director installs Ceph Monitor onto the Controller nodes in situations where you deploy Ceph Storage nodes as part of your environment. Block storage (cinder) - Used as external block storage for highly available Controller nodes. This node contains the following components: OpenStack Block Storage (cinder) volume OpenStack Telemetry agents Open vSwitch. Object storage (swift) - These nodes provide an external storage layer for OpenStack Swift. The Controller nodes access object storage nodes through the Swift proxy. Object storage nodes contain the following components: OpenStack Object Storage (swift) storage OpenStack Telemetry agents Open vSwitch. 1.3. Understanding high availability in Red Hat OpenStack Platform The Red Hat OpenStack Platform (RHOSP) director uses a Controller node cluster to provide highly available services to your OpenStack Platform environment. For each service, director installs the same components on all Controller nodes and manages the Controller nodes together as a single service. This type of cluster configuration provides a fallback in the event of operational failures on a single Controller node. This provides OpenStack users with a certain degree of continuous operation. The OpenStack Platform director uses some key pieces of software to manage components on the Controller node: Pacemaker - Pacemaker is a cluster resource manager. Pacemaker manages and monitors the availability of OpenStack components across all nodes in the cluster. HAProxy - Provides load balancing and proxy services to the cluster. Galera - Replicates the RHOSP database across the cluster. Memcached - Provides database caching. 
Note From version 13 and later, you can use director to deploy High Availability for Compute Instances (Instance HA). With Instance HA you can automate evacuating instances from a Compute node when the Compute node fails. 1.4. Understanding containerization in Red Hat OpenStack Platform Each OpenStack Platform service on the undercloud and overcloud runs inside an individual Linux container on their respective node. This containerization provides a method to isolate services, maintain the environment, and upgrade Red Hat OpenStack Platform (RHOSP). Red Hat OpenStack Platform 16.2 supports installation on the Red Hat Enterprise Linux 8.4 operating system. Red Hat Enterprise Linux 8.4 no longer includes Docker and provides a new set of tools to replace the Docker ecosystem. This means OpenStack Platform 16.2 replaces Docker with these new tools for OpenStack Platform deployment and upgrades. Podman Pod Manager (Podman) is a container management tool. It implements almost all Docker CLI commands, not including commands related to Docker Swarm. Podman manages pods, containers, and container images. One of the major differences between Podman and Docker is that Podman can manage resources without a daemon running in the background. For more information about Podman, see the Podman website . Buildah Buildah specializes in building Open Containers Initiative (OCI) images, which you use in conjunction with Podman. Buildah commands replicate the contents of a Dockerfile. Buildah also provides a lower-level coreutils interface to build container images, so that you do not require a Dockerfile to build containers. Buildah also uses other scripting languages to build container images without requiring a daemon. For more information about Buildah, see the Buildah website . Skopeo Skopeo provides operators with a method to inspect remote container images, which helps director collect data when it pulls images. Additional features include copying container images from one registry to another and deleting images from registries. Red Hat supports the following methods for managing container images for your overcloud: Pulling container images from the Red Hat Container Catalog to the image-serve registry on the undercloud and then pulling the images from the image-serve registry. When you pull images to the undercloud first, you avoid multiple overcloud nodes simultaneously pulling container images over an external connection. Pulling container images from your Satellite 6 server. You can pull these images directly from the Satellite because the network traffic is internal. This guide contains information about configuring your container image registry details and performing basic container operations. 1.5. Working with Ceph Storage in Red Hat OpenStack Platform It is common for large organizations that use Red Hat OpenStack Platform (RHOSP) to serve thousands of clients or more. Each OpenStack client is likely to have their own unique needs when consuming block storage resources. Deploying glance (images), cinder (volumes), and nova (Compute) on a single node can become impossible to manage in large deployments with thousands of clients. Scaling OpenStack externally resolves this challenge. However, there is also a practical requirement to virtualize the storage layer with a solution like Red Hat Ceph Storage so that you can scale the RHOSP storage layer from tens of terabytes to petabytes, or even exabytes of storage. 
Red Hat Ceph Storage provides this storage virtualization layer with high availability and high performance while running on commodity hardware. While virtualization might seem like it comes with a performance penalty, Ceph stripes block device images as objects across the cluster, meaning that large Ceph Block Device images have better performance than a standalone disk. Ceph Block devices also support caching, copy-on-write cloning, and copy-on-read cloning for enhanced performance. For more information about Red Hat Ceph Storage, see Red Hat Ceph Storage . Note For multi-architecture clouds, Red Hat supports only pre-installed or external Ceph implementation. For more information, see Integrating an Overcloud with an Existing Red Hat Ceph Cluster and Configuring the CPU architecture for the overcloud .
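Returning briefly to the container tooling described in the containerization section above, the following commands illustrate typical Skopeo and Podman operations; the image references are placeholders rather than values from this guide:
# Inspect a remote container image without pulling it.
skopeo inspect docker://registry.example.com/myproject/myimage:latest
# Copy an image from one registry to another.
skopeo copy docker://registry.example.com/myproject/myimage:latest \
    docker://registry.internal.example.com/myproject/myimage:latest
# List images and running containers managed by Podman, which does not require a daemon.
podman images
podman ps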
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_introduction-to-director
4.120. kexec-tools
4.120. kexec-tools 4.120.1. RHSA-2011:1532 - Moderate: kexec-tools security, bug fix, and enhancement update An updated kexec-tools package that fixes three security issues, various bugs, and adds several enhancements is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System ( CVSS ) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. kexec-tools allows a Linux kernel to boot from the context of a running kernel. Security Fixes CVE-2011-3588 Kdump used the Secure Shell ( SSH ) StrictHostKeyChecking=no option when dumping to SSH targets, causing the target kdump server's SSH host key not to be checked. This could make it easier for a man-in-the-middle attacker on the local network to impersonate the kdump SSH target server and possibly gain access to sensitive information in the vmcore dumps. CVE-2011-3589 mkdumprd created initial RAM disk (initrd) files with world-readable permissions. A local user could possibly use this flaw to gain access to sensitive information, such as the private SSH key used to authenticate to a remote server when kdump was configured to dump to an SSH target. CVE-2011-3590 mkdumprd included unneeded sensitive files (such as all files from the /root/.ssh/ directory and the host's private SSH keys) in the resulting initrd . This could lead to an information leak when initrd files were previously created with world-readable permissions. Note With this update, only the SSH client configuration, known hosts files, and the SSH key configured via the newly introduced sshkey option in /etc/kdump.conf are included in the initrd . The default is the key generated when running the service kdump propagate command, /root/.ssh/kdump_id_rsa . Red Hat would like to thank Kevan Carstensen for reporting these issues. Bug Fixes BZ# 681796 Kdump is a kexec based crash dumping mechanism for Linux. Root System Description Pointer ( RSDP ) is a data structure used in the ACPI programming interface. Kdump uses kexec to boot to a second kernel, the "dump-capture" or "crash kernel", when a dump of the system kernel's memory needs to be taken. On systems using Extensible Firmware Interface ( EFI ), attempting to boot a second kernel using kdump failed, the dump-capture kernel became unresponsive and the following error message was logged. With this update, a new parameter, acpi_rsdp , has been added to the noefi kernel command. Now, if EFI is detected, a command is given to the second kernel, in the format, noefi acpi_rsdp=X , not to use EFI and simultaneously passes the address of RSDP to the second kernel. The second kernel now boots successfully on EFI machines. BZ# 693025 To reduce the size of the vmcore dump file, kdump allows you to specify an external application (that is, a core collector) to compress the data. The core collector was not enabled by default when dumping to a secure location via SSH . Consequently, if users had not specified an argument for core_collector in kdump.conf, when kdump was configured to dump kernel data to a secure location using SSH , it generated a complete vmcore, without removing free pages. With this update, the default core collector will be makedumpfile when kdump is configured to use SSH . As a result, the vmcore dump file is now compressed by default. 
BZ# 707805 Previously, the mkdumprd utility failed to parse the /etc/mdadm.conf configuration file. As a consequence, mkdumprd failed to create an initial RAM disk file system ( initrd ) for kdump crash recovery and the kdump service failed to start. With this update, mkdumprd has been modified so that it now parses the configuration file and builds initrd correctly. The kdump service now starts as expected. BZ# 708503 In order for Coverity to scan defects in downstream patches separately, it is necessary to make a clean raw build of the source code without patches. However, kexec-tools would not build without downstream patches. With this update, by adding a specified patch in kexec-tools spec file, kexec-tools can now be built from source in the scenario described. BZ# 709441 On 64-bit PowerPC-based systems with more than 1 TB of RAM, the kexec-tools utility terminated unexpectedly with a segmentation fault when kdump was started, thus preventing crash kernel capture. With this update, the problem has been fixed, kexec-tools no longer crashes, and kdump can now be used on a system with greater than 1 TB of physical memory. BZ# 719105 The mkdumprd utility creates an initial RAM disk file system ( initrd ) for use in conjunction with the booting of a second kernel within the kdump framework for crash recovery. Prior to this update, mkdumprd became unresponsive when the running kernel was not the same as the target kernel. With this update the problem has been fixed and mkdumprd no longer hangs in the scenario described. BZ# 731236 A regression caused the following erroneous error message to be displayed when kdump was setting up Ethernet network connections in order to reach a remote dump target: A patch has been applied to correct the problem and the error no longer occurs in the scenario described. BZ# 731394 During kdump start up, a check was made to see if the amount of RAM the currently running kernel was using was more than 70% of the amount of RAM reserved for kdump . If the memory in use was greater than 70% of the memory reserved, the following error message was displayed. Due to improvements in conserving memory in the kexec kernel the warning is no longer considered valid. This update removes the warning. BZ# 739050 Previously, if kexec-tools was installed and kdump was not running, installing the fence-agents package caused the following erroneous error message: This update corrects the kexec-tools spec file and the erroneous error message no longer appears. BZ# 746207 Removing kexec-tools on IBM System z resulted in the following error, even though the package was successfully removed. With this update, changes have been made to the kexec-tools spec file and the erroneous error message no longer appears. BZ# 747233 When providing firmware at operating system install time, supplied as part of the Driver Update program ( DUP ), the installation completed successfully but the operating system would fail on reboot. An error message in the following format was displayed: With this update, a check for the directory containing the DUP supplied firmware is made and the problem no longer occurs. Enhancements BZ# 585332 With large memory configurations, some machines take a long time to dump state information when a kernel panic occurs. The cluster software sometimes forced a reboot before the dump completed. With this update, co-ordination between kdump and cluster fencing for long kernel panic dumps is added. 
BZ# 598067 A new configuration option in kdump.conf , force_rebuild , has been added. When enabled, this option forces the kdump init script to rebuild initrd every time the system starts, thus ensuring kdump has enough storage space on each system start-up. BZ# 725484 On x86, AMD64 & Intel 64 platforms kexec-tools now uses nr_cpus=1 rather than maxcpus=1 to save memory required by the second kernel. PowerPC platforms currently cannot handle this feature. BZ# 727892 A warning was added to use maxcpus=1 instead of nr_cpus=1 for older kernels (see the enhancement above). BZ# 734528 Kdump has been provided with an option so that memory usage can be logged in the second kernel at various stages for debugging memory consumption issues. The second kernel memory usage debugging capability can be enabled via the new kdump.conf debug_mem_level option. BZ# 740275 , BZ# 740277 With this update, kdump support for dumping core to ext4 file systems, and also to XFS file systems on data disks (but not the root disk) has been added. Note BZ# 740278 With this update, kdump support for dumping core to Btrfs file systems has been added. Note BZ# 748748 Kdump did not check the return code of the mount command. Consequently, when the command mount -t debugfs debug /sys/kernel/debug was issued in the kdump service script, if the file system was already mounted, the message returned was erroneously logged as an error message. With this update, the logic in the kdump service script has been improved and the kdump service script now functions as expected. Users of kexec-tools should upgrade to this updated package, which contains backported patches to resolve these issues and add these enhancements. 4.120.2. RHBA-2012:0479 - kexec-tools bug fix update Updated kexec-tools packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. The kexec-tools package contains the /sbin/kexec binary and utilities that together form the user-space component of the kernel's kexec feature. The /sbin/kexec binary facilitates a new kernel to boot using the kernel's kexec feature either on a normal or a panic reboot. The kexec fastboot mechanism allows booting a Linux kernel from the context of an already running kernel. Bug Fix BZ# 773358 When running kdump after a kernel crash on the system using the ext4 file systems, the kdump initrd could have been created with the zero byte size. This happened because the system waits for several seconds before writing the changes to the disk when using the ext4 file system. Consequently, the kdump initial root file system (rootfs) could not have been mounted and kdump failed. This update modifies kexec-tools to perform the sync operations after creating the initrd. This ensures that initrd is properly written to the disk before trying to mount rootfs so that kdump now successfully proceeds and captures a core dump. Enhancement BZ# 808466 The kdump utility does not support Xen para-virtualized (PV) drivers on Hardware Virtualized Machine (HVM) guests in Red Hat Enterprise Linux 6. Therefore, kdump failed to start if the guest had loaded PV drivers. This update modifies underlying code to allow kdump to start without PV drivers on HVM guests configured with PV drivers. All users of kexec-tools are advised to upgrade to these updated packages, which fix this bug add this enhancement.
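The kdump.conf options referred to in these notes could be combined as in the following sketch of an /etc/kdump.conf fragment; the specific values are illustrative, not defaults taken from the errata:
# /etc/kdump.conf fragment (illustrative values)
core_collector makedumpfile -c      # compress the vmcore; also the default collector for SSH targets
sshkey /root/.ssh/kdump_id_rsa      # key used when dumping to an SSH target
force_rebuild 1                     # rebuild the kdump initrd on every system start
debug_mem_level 3                   # log second-kernel memory usage for debugging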
[ "ACPI Error: A valid RSDP was not found", "sed: /etc/cluster_iface: No such file or directory", "Your running kernel is using more than 70% of the amount of space you reserved for kdump, you should consider increasing your crashkernel reservation", "Non-fatal <unknown> scriptlet failure in rpm package", "error reading information on service kdump: No such file or directory", "cp: cannot stat `/lib/firmware/*': No such file or directory", "For XFS, the XFS layer product needs to be installed. Layered products are those not included by default in the base Red Hat Enterprise Linux operating system.", "BusyBox's \"findfs\" utility does not yet support Btrfs, so UUID/LABEL resolving does not work. Avoid using UUID/LABEL syntax when dumping core to Btrfs file systems. Btrfs itself is still considered experimental; refer to Red Hat Technical Notes." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/kexec-tools
Chapter 3. Setting up Red Hat Ansible Lightspeed for your organization
Chapter 3. Setting up Red Hat Ansible Lightspeed for your organization As a Red Hat customer portal administrator, you must configure Red Hat Ansible Lightspeed to connect to your IBM watsonx Code Assistant instance. This chapter provides information about configuring both the Red Hat Ansible Lightspeed cloud service and on-premise deployment. 3.1. Configuration requirements 3.1.1. Licensing requirements Red Hat Ansible Lightspeed cloud service To use the Red Hat Ansible Lightspeed cloud service, you must meet one of the following requirements: Your organization has a trial or paid subscription to both the Red Hat Ansible Automation Platform and IBM watsonx Code Assistant. Your organization has a trial or paid subscription to the Red Hat Ansible Automation Platform, and you have a Red Hat Ansible Lightspeed trial account. Note A Red Hat Ansible Lightspeed trial account does not require an IBM watsonx Code Assistant subscription. Red Hat Ansible Lightspeed on-premise deployment To use an on-premise deployment of Red Hat Ansible Lightspeed, your organization must have the following subscriptions: A trial or paid subscription to the Red Hat Ansible Automation Platform An installation of IBM watsonx Code Assistant for Red Hat Ansible Lightspeed on Cloud Pak for Data 3.1.2. Setup requirements To set up Red Hat Ansible Lightspeed for your organization, you need the following IBM watsonx Code Assistant information: API key A unique API key authenticates all requests made from Red Hat Ansible Lightspeed to IBM watsonx Code Assistant. Each Red Hat organization with a valid Ansible Automation Platform subscription must have a configured API key. When an authenticated RH-SSO user creates a task request in Red Hat Ansible Lightspeed, the API key associated with the user's Red Hat organization is used to authenticate the request to IBM watsonx Code Assistant. Model ID A unique model ID identifies an IBM watsonx Code Assistant model in your IBM Cloud account. The model ID that you configure in the Ansible Lightspeed administrator portal is used as the default model, and can be accessed by all Ansible Lightspeed users within your organization. Important You must configure both the API key and the model ID when you are initially configuring Red Hat Ansible Lightspeed. 3.2. Setting up Red Hat Ansible Lightspeed cloud service As a Red Hat customer portal administrator, you must configure Red Hat Ansible Lightspeed cloud service to connect to your IBM watsonx Code Assistant instance. 3.2.1. Logging in to the Ansible Lightspeed administrator portal Use the Ansible Lightspeed administrator portal to connect Red Hat Ansible Lightspeed to IBM watsonx Code Assistant. Prerequisites You have organization administrator privileges to a Red Hat Customer Portal organization with a valid Red Hat Ansible Automation Platform subscription. Procedure Log in to the Ansible Lightspeed portal as an organization administrator. Click Log in Log in with Red Hat . Enter your Red Hat account username and password. The Ansible Lightspeed Service uses Red Hat Single Sign-On (RH-SSO) for authentication. As part of the authentication process, the Ansible Lightspeed Service checks whether your organization has an active Ansible Automation Platform subscription. On successful authentication, the login screen is displayed along with your username and your assigned user role. From the login screen, click Admin Portal . 
You are redirected to the Red Hat Ansible Lightspeed with IBM watsonx Code Assistant administrator portal where you can connect Red Hat Ansible Lightspeed to your IBM watsonx Code Assistant instance. 3.2.2. Configuring Red Hat Ansible Lightspeed cloud service Use this procedure to configure the Red Hat Ansible Lightspeed cloud service. Prerequisites You have obtained an API key and a model ID from IBM watsonx Code Assistant that you want to use in Red Hat Ansible Lightspeed. For information about how to obtain an API key and model ID from IBM watsonx Code Assistant, see the IBM watsonx Code Assistant documentation . Procedure Log in to the Ansible Lightspeed portal as an organization administrator. From the login screen, click Admin Portal . Specify the API key of your IBM watsonx Code Assistant instance: Under IBM Cloud API Key , click Add API key . A screen to enter the API Key is displayed. Enter the API Key. Optional: Click Test to validate the API key. Click Save . Specify the model ID of the model that you want to use: Click Model Settings . Under Model ID , click Add Model ID . A screen to enter the Model Id is displayed. Enter the Model ID that you obtained in the procedure as the default model for your organization. Optional: Click Test model ID to validate the model ID. Click Save . When the API key and model ID is successfully validated, Red Hat Ansible Lightspeed is connected to your IBM watsonx Code Assistant instance. Additional resources Troubleshooting Red Hat Ansible Lightspeed configuration errors 3.3. Setting up Red Hat Ansible Lightspeed on-premise deployment As an Red Hat Ansible Automation Platform administrator, you can set up a Red Hat Ansible Lightspeed on-premise deployment and connect it to an IBM watsonx Code Assistant instance. After the on-premise deployment is successful, you can start using the Ansible Lightspeed service with the Ansible Visual Studio (VS) Code extension. The following capability is not yet supported on Red Hat Ansible Lightspeed on-premise deployments: Viewing telemetry data on the Admin dashboard Note Red Hat Ansible Lightspeed on-premise deployments are supported on Red Hat Ansible Automation Platform version 2.4 and later. 3.3.1. Overview This section provides information about the system requirements, prerequisites, and the process for setting up a Red Hat Ansible Lightspeed on-premise deployment. 3.3.1.1. System requirements Your system must meet the following minimum system requirements to install and run the Red Hat Ansible Lightspeed on-premise deployment. Requirement Minimum requirement RAM 5 GB CPU 1 Local disk 40 GB To see the rest of the Red Hat Ansible Automation Platform system requirements, see the System requirements section of Planning your installation . Note You must also have installed IBM watsonx Code Assistant for Red Hat Ansible Lightspeed on Cloud Pak for Data. The installation includes a base model that you can use to set up your Red Hat Ansible Lightspeed on-premise deployment. For installation information, see the watsonx Code Assistant for Red Hat Ansible Lightspeed on Cloud Pak for Data documentation . 3.3.1.2. Prerequisites You have installed Red Hat Ansible Automation Platform on your Red Hat OpenShift Container Platform environment. You have administrator privileges for Red Hat Ansible Automation Platform. You have installed IBM watsonx Code Assistant for Red Hat Ansible Lightspeed on Cloud Pak for Data. 
Your system meets the minimum system requirements to set up Red Hat Ansible Lightspeed on-premise deployment. You have obtained an API key and a model ID from IBM watsonx Code Assistant. For information about obtaining an API key and model ID from IBM watsonx Code Assistant, see the IBM watsonx Code Assistant documentation . For information about installing IBM watsonx Code Assistant for Red Hat Ansible Lightspeed on Cloud Pak for Data, see the watsonx Code Assistant for Red Hat Ansible Lightspeed on Cloud Pak for Data documentation . You have an existing external PostgreSQL database configured for the Red Hat Ansible Automation Platform, or have a database created for you when configuring the Red Hat Ansible Lightspeed on-premise deployment. 3.3.1.3. Process for configuring a Red Hat Ansible Lightspeed on-premise deployment Perform the following tasks to install and configure a Red Hat Ansible Lightspeed on-premise deployment: Install the Red Hat Ansible Automation Platform operator Create an OAuth application Create connections secrets for Red Hat Ansible Automation Platform and IBM watsonx Code Assistant both Create and deploy a Red Hat Ansible Lightspeed instance Update the Redirect URI to connect to your Red Hat Ansible Lightspeed on-premise deployment Install and configure Ansible Visual Studio Code extension to connect to Red Hat Ansible Lightspeed on-premise deployment Optional: If you want to connect to a different IBM watsonx Code Assistant, update the connection secrets on an existing Red Hat Ansible Automation Platform operator Optional: Monitor your Red Hat Ansible Lightspeed on-premise deployment Optional: Use the Ansible Lightspeed REST API to build custom automation development and tooling workflow outside of VS Code 3.3.2. Installing the Red Hat Ansible Automation Platform operator Use this procedure to install the Ansible Automation Platform operator on the Red Hat OpenShift Container Platform. Prerequisites You have installed and configured automation controller. Procedure Log in to the Red Hat OpenShift Container Platform as an administrator. Create a namespace: Go to Administration Namespaces . Click Create Namespace . Enter a unique name for the namespace. Click Create . Install the operator: Go to Operators OperatorHub . Select the namespace where you want to install the Red Hat Ansible Automation Platform operator. Search for the Ansible Automation Platform operator. From the search results, select the Ansible Automation Platform (provided by Red Hat) tile. Select an Update Channel . You can select either stable-2.x or stable-2.x-cluster-scoped as the channel. Select the destination namespace if you selected "stable-2.x" as the update channel. Select Install . It takes a few minutes for the operator to be installed. Click View Operator to see the details of your newly installed Red Hat Ansible Automation Platform operator. 3.3.3. Creating an OAuth application Use this procedure to create an OAuth application for your Red Hat Ansible Lightspeed on-premise deployment. Prerequisites You have an operational Ansible automation controller instance. Procedure Log in to the automation controller as an administrator. Under Administration , click Applications Add . Enter the following information: Name : Specify a unique name for your application. Organization : Select a preferred organization. Authorization grant type : Select Authorization code . Redirect URIs : Enter a temporary URL for now, for example, https://temp/ . 
The exact Red Hat Ansible Lightspeed application URL is generated after the on-premise deployment is completed. After the deployment is completed, you must change the Redirect URI to point to the generated Red Hat Ansible Lightspeed application URL. For more information, see Updating the Redirect URIs . From the Client type list, select Confidential . Click Save . A pop-up window is displayed along with the generated application client ID and client secret. Copy and save both the generated client ID and client secret for future use. Important This is the only time the pop-up window is displayed. Therefore, ensure that you copy both the client ID and client secret, as you need these tokens to create connections secrets for Red Hat Ansible Automation Platform and IBM watsonx Code Assistant both. The following image is an example of the generated client ID and client secret: Additional resources Troubleshooting Red Hat Ansible Lightspeed on-premise deployment errors 3.3.4. Creating connection secrets You must create an authorization secret to connect to Red Hat Ansible Automation Platform, and a model secret to connect to IBM watsonx Code Assistant. If you need to trust a custom Certificate Authority, you must create a bundle secret. Prerequisites You have installed the Ansible Automation Platform operator on the Red Hat OpenShift Container Platform. You have created an OAuth application in the automation controller. You have obtained an API key and a model ID from IBM watsonx Code Assistant. For information about obtaining an API key and model ID from IBM watsonx Code Assistant, see the IBM watsonx Code Assistant documentation . For information about installing IBM watsonx Code Assistant for Red Hat Ansible Lightspeed on Cloud Pak for Data, see the watsonx Code Assistant for Red Hat Ansible Lightspeed on Cloud Pak for Data documentation . 3.3.4.1. Creating authorization and model secrets Use this procedure to create secrets to connect to both Red Hat Ansible Automation Platform and IBM watsonx Code Assistant. Procedure Go to the Red Hat OpenShift Container Platform. Select Workloads Secrets . Click Create Key/value secret . From the Projects list, select the namespace that you created when you installed the Red Hat Ansible Automation Platform operator. Create an authorization secret to connect to the Red Hat Ansible Automation Platform: Click Create Key/value secret . In Secret name , enter a unique name for the secret. For example, auth-aiconnect . Add the following keys and their associated values individually: Key Value auth_api_url Enter the API URL of the automation controller in the following format based on the Ansible Automation Platform version you are using: For Ansible Automation Platform 2.4: https://<CONTROLLER_SERVER_NAME>/api For Ansible Automation Platform 2.5: https://<GATEWAY_SERVER_NAME> auth_api_key Enter the client ID of the OAuth application that you recorded earlier. auth_api_secret Enter the client secret of the OAuth application that you recorded earlier. auth_allowed_hosts Enter the list of strings representing the host/domain names used by the underlying Django framework to restrict which hosts can access the service. This is a security measure to prevent HTTP Host header attacks. For more information, see Allowed hosts in django documentation . auth_verify_ssl Enter the value as true . Important Ensure that you do not accidentally add any whitespace characters (extra line, space, and so on) to the value fields. 
If there are any extra or erroneous characters in the secret, the connection to Red Hat Ansible Automation Platform fails. Click Create . The following image is an example of an authorization secret: Similarly, create a model secret to connect to an IBM watsonx Code Assistant model: Click Create Key/value secret . In Secret name , enter a unique name for the secret. For example, model-aiconnect . Add the following keys and their associated values individually: Key Value username Enter the username you use to connect to an IBM Cloud Pak for Data deployment. model_type Enter wca-onprem to connect to an IBM Cloud Pak for Data deployment. model_url Enter the URL to IBM watsonx Code Assistant. model_api_key Enter the API key of your IBM watsonx Code Assistant model in your IBM Cloud Pak for Data deployment. model_id Enter the model ID of your IBM watsonx Code Assistant model in your IBM Cloud Pak for Data deployment. Important Ensure that you do not accidentally add any whitespace characters (extra line, space, and so on) to the value fields. If there are any extra or erroneous characters in the secret, the connection to IBM watsonx Code Assistant fails. Click Create . After you create the authorization and model secrets, you must select the secrets when you create and deploy a Red Hat Ansible Lightspeed instance . 3.3.4.2. Creating a bundle secret to trust a custom Certificate Authority If you encounter a scenario where you need to trust a custom Certificate Authority, you can customize a few variables for the Red Hat Ansible Lightspeed instance. Trusting a custom Certificate Authority enables the Red Hat Ansible Lightspeed instance to access network services configured with SSL certificates issued locally, such as cloning a project from an internal Git server via HTTPS. If you encounter the following error when syncing projects, it indicates that you need to customize the variables. fatal: unable to access 'https://private.repo./mine/ansible-rulebook.git': SSL certificate problem: unable to get local issuer certificate Procedure Use one of the following methods to create a custom bundle secret using the CLI: Using a Certificate Authority secret Create a bundle_cacert_secret using the following command: # kubectl create secret generic <resourcename>-custom-certs \ --from-file=bundle-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> 1 Where: <1>: Path of the self-signed certificate. Modify the file path to point to where your self-signed certificates are stored. The Red Hat Ansible Lightspeed instance looks for the data field bundle-ca.crt in the specified bundle_cacert_secret secret. The following is an example of a bundle CA certificate: spec: ... bundle_cacert_secret: <resourcename>-custom-certs Using the kustomization.yaml configuration file Use the following command: secretGenerator: - name: <resourcename>-custom-certs files: - bundle-ca.crt=<path+filename> options: disableNameSuffixHash: true After you create the bundle secret, you must select the secret when you create and deploy a Red Hat Ansible Lightspeed instance . Additional resources Troubleshooting Red Hat Ansible Lightspeed on-premise deployment errors 3.3.5. Creating and deploying a Red Hat Ansible Lightspeed instance Use this procedure to create and deploy a Red Hat Ansible Lightspeed instance to your namespace. Prerequisites You have created connection secrets for both Red Hat Ansible Automation Platform and IBM watsonx Code Assistant. Procedure Go to Red Hat OpenShift Container Platform. Select Operators Installed Operators . 
From the Projects list, select the namespace where you want to install the Red Hat Ansible Lightspeed instance and where you created the connection secrets. Locate and select the Ansible Automation Platform (provided by Red Hat) operator that you installed earlier. Select All instances Create new . From the Create new list, select the Ansible Lightspeed modal. Ensure that you have selected Form view as the configuration mode. Provide the following information: Name : Enter a unique name for your Red Hat Ansible Lightspeed instance. Secret where the authentication information can be found : Select the authentication secret that you created to connect to the Red Hat Ansible Automation Platform. Secret where the model configuration can be found : Select the model secret that you created to connect to IBM watsonx Code Assistant. Optional: If you created a bundle secret to trust a custom Certificate Authority, select the bundle_cacert_secret from the Advanced Bundle CA Certificate Secret list. Click Create . The Red Hat Ansible Lightspeed instance takes a few minutes to deploy to your namespace. After the installation status is displayed as Successful , the Ansible Lightspeed portal URL is available under Networking Routes in Red Hat OpenShift Container Platform. Additional resources Troubleshooting Red Hat Ansible Lightspeed on-premise deployment errors 3.3.6. Updating the Redirect URIs When Ansible users log in or log out of the Ansible Lightspeed service, the Red Hat Ansible Automation Platform redirects the user's browser to a specified URL. You must configure the redirect URLs so that users can log in and log out of the application successfully. Prerequisites You have created and deployed a Red Hat Ansible Lightspeed instance to your namespace. Procedure Get the URL of the Ansible Lightspeed instance: Go to Red Hat OpenShift Container Platform. Select Networking Routes . Locate the Red Hat Ansible Lightspeed instance that you created. Copy the Location URL of the Red Hat Ansible Lightspeed instance. Update the login redirect URI: In the automation controller, go to Administration Applications . Select the Lightspeed OAuth application that you created. In the Redirect URIs field of the OAuth application, enter the URL in the following format: https://<lightspeed_route>/complete/aap/ An example of the URL is https://lightspeed-on-prem-web-service.com/complete/aap/ . Important The Redirect URL must include the following information: The prefix https:// The <lightspeed_route> URL, which is the URL of the Red Hat Ansible Lightspeed instance that you copied earlier The suffix /complete/aap/ that includes a trailing forward slash ( / ) at the end Click Save . Update the logout redirect URI based on your setup: Procedure when both the Red Hat Ansible Lightspeed instance and automation controller are installed using Ansible Automation Platform operator : Log in to the cluster on which the Ansible Automation Platform instance is running. Identify the AutomationController custom resource. Select [YAML view] . Add the following entry to the YAML file: ```yaml spec: ... extra_settings: - setting: LOGOUT_ALLOWED_HOSTS value: "'<lightspeed_route-HostName>'" ``` Important Ensure the following while specifying the value: parameter: Specify the hostname without the network protocol, such as https:// . For example, the correct hostname would be my-aiconnect-instance.somewhere.com , and not https://my-aiconnect-instance.somewhere.com . Use the single and double quotes exactly as specified in the codeblock.
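If you manage the cluster entirely from a terminal, the same extra_settings entry can be added by editing the AutomationController custom resource directly. This is a sketch under stated assumptions: the lowercase resource name automationcontroller and the namespace and instance placeholders are illustrative — use the names from your own Ansible Automation Platform deployment, and keep the quoting format shown above.

```
# List the AutomationController resources to find the instance name (namespace is a placeholder).
oc get automationcontroller -n <aap-namespace>

# Open the resource and add the entry under spec, keeping the double-quoted,
# single-quoted value format from the codeblock above:
#   extra_settings:
#     - setting: LOGOUT_ALLOWED_HOSTS
#       value: "'<lightspeed_route-HostName>'"
oc edit automationcontroller <instance-name> -n <aap-namespace>
```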
If you change the single quotes to double quotes and vice versa, you will encounter errors when logging out. Use a comma to specify multiple instances of Red Hat Ansible Lightspeed deployment. If you are running multiple instances of Red Hat Ansible Lightspeed application with a single Red Hat Ansible Automation Platform deployment, add a comma (,) and then add the other hostname values. For example, you can add multiple hostnames, such as "'my-lightspeed-instance1.somewhere.com','my-lightspeed-instance2.somewhere.com'" Apply the revised YAML. This task restarts the automation controller pods. Procedure when the Ansible Automation Platform operator is run on a virtual machine : Log in to the virtual machine with the Ansible Automation Platform controller. In the /etc/tower/conf.d/custom.py file, add the following parameter: LOGOUT_ALLOWED_HOSTS = "<lightspeed_route-HostName>" Important If the /etc/tower/conf.d/custom.py file does not exist, create it. Specify the hostname without the network protocol, such as https:// . For example, the correct hostname would be my-aiconnect-instance.somewhere.com , and not https://my-aiconnect-instance.somewhere.com . Use the single and double quotes in pairs; do not mix single quotes with double quotes. Restart the automation controller by executing the following command: USD automation-controller-service restart Procedure when the platform gateway is run on a virtual machine : Log in to the virtual machine with the platform gateway. Add the following parameter in either the gateway/settings.py file or in the ansible-automation-platform/gateway/settings.py file: LOGOUT_ALLOWED_HOSTS = "<lightspeed_route-HostName>" Important Specify the hostname without the network protocol, such as https:// . For example, the correct hostname would be my-aiconnect-instance.somewhere.com , and not https://my-aiconnect-instance.somewhere.com . Use double quotes exactly as specified in the codeblock. Additional resources Troubleshooting Red Hat Ansible Lightspeed on-premise deployment errors 3.3.7. Configuring Ansible VS Code extension for Red Hat Ansible Lightspeed on-premise deployment To access the on-premise deployment of Red Hat Ansible Lightspeed, all Ansible users within your organization must install the Ansible Visual Studio (VS) Code extension in their VS Code editor, and configure the extension to connect to the on-premise deployment. Prerequisites You have installed VS Code version 1.70.1 or later. Procedure Open the VS Code application. From the Activity bar, click the Extensions icon. From the Installed Extensions list, select Ansible . From the Ansible extension page, click the Settings icon ( ) and select Extension Settings . Select Ansible Lightspeed settings and specify the following information: In the URL for Ansible Lightspeed field, enter the Route URL of the Red Hat Ansible Lightspeed on-premise deployment. Ansible users must have Ansible Automation Platform controller credentials. Optional: If you want to use a custom model instead of the default model, in the Model ID Override field, enter the custom model ID. Your settings are automatically saved in VS Code. After configuring Ansible VS Code extension to connect to Red Hat Ansible Lightspeed on-premise deployment, you must log in to Ansible Lightspeed through the Ansible VS Code extension . Note If your organization recently subscribed to the Red Hat Ansible Automation Platform, it might take a few hours for Red Hat Ansible Lightspeed to detect the new subscription. 
In VS Code, use the Refresh button in the Ansible extension from the Activity bar to check again. Additional resources Troubleshooting Red Hat Ansible Lightspeed on-premise deployment errors 3.3.8. Updating connection secrets on an existing Red Hat Ansible Automation Platform operator After you have set up the Red Hat Ansible Lightspeed on-premise deployment successfully, you can modify the deployment if you want to connect to another IBM watsonx Code Assistant model. For example, you connected to the default IBM watsonx Code Assistant model but now want to connect to a custom model instead. To connect to another IBM watsonx Code Assistant model, you must create new connection secrets, and then update the connection secrets and parameters on an existing Red Hat Ansible Automation Platform operator. Prerequisites You have set up a Red Hat Ansible Lightspeed on-premise deployment. You have obtained an API key and a model ID of the IBM watsonx Code Assistant model you want to connect to. You have created new authorization and model connection secrets for the IBM watsonx Code Assistant model that you want to connect to. For information about creating authorization and model connection secrets, see Creating connection secrets . Procedure Go to the Red Hat OpenShift Container Platform. Select Operators Installed Operators . From the Projects list, select the namespace where you installed the Red Hat Ansible Lightspeed instance. Locate and select the Ansible Automation Platform (provided by Red Hat) operator that you installed earlier. Select the Ansible Lightspeed tab. Find and select the instance you want to update. Click Actions Edit AnsibleLightspeed . The editor switches to a text or YAML view. Scroll the text to find the spec: section. ) Find the entry for the secret you have changed and saved under a new name. Change the name of the secrets to the new secrets. Click Save . The new AnsibleLightspeed Pods are created. After the new pods are running successfully, the old AnsibleLightspeed Pods are terminated. Additional resources Troubleshooting Red Hat Ansible Lightspeed on-premise deployment errors 3.3.9. Monitoring your Red Hat Ansible Lightspeed on-premise deployment After the Red Hat Ansible Lightspeed on-premise deployment is successful, use the following procedure to monitor the metrics on an API endpoint /metrics . Procedure Create a system auditor user: Create a user with a system auditor role in the Red Hat Ansible Automation Platform. For the procedure, see the Creating a user section of Getting started with Ansible Automation Platform . Verify that you can log in to the Ansible Lightspeed portal for on-premise deployment ( https://<lightspeed_route>/ ) as the newly-created system auditor user, and then log out. Create a token for the system auditor user: Log in to the Ansible Lightspeed portal for on-premise deployment ( https://<lightspeed_route>/admin ) as an administrator by using the following credentials: Username: admin Password: The secret that is named as <lightspeed-custom-resource-name>-admin-password in the Red Hat OpenShift Container Platform cluster namespace where Red Hat Ansible Lightspeed is deployed. On the Django administration window, select Users from the Users area. A list of users is displayed. Verify that the user with the system auditor role is listed in the Users list. From the Django Oauth toolkit area, select Access tokens Add . 
Provide the following information and click Save : User : Use the magnifying glass icon to search and select the user with the system auditor role. Token : Specify a token for the user. Copy this token for later use. Id token : Select the token ID. Application : Select Ansible Lightspeed for VS Code . Expires : Select the date and time when you want the token to expire. Scope : Specify the scope as read write . An access token is created for the user with a system auditor role. Log out from the Ansible Lightspeed portal for on-premise deployment. Monitor your Red Hat Ansible Lightspeed on-premise deployment, by using the authorization token of the user with the system auditor role, to access the metrics endpoint https://<lightspeed_route>/metrics . 3.3.10. Using the Ansible Lightspeed REST API As the platform administrator, you can configure and use the Ansible Lightspeed REST API to build a custom automation development and tooling workflow outside of VS Code. For information about the Ansible Lightspeed REST API, see Ansible AI Connect. 1.0.0 (v1) in the API catalog. Note The Ansible Lightspeed REST API is available for Ansible Automation Platform 2.5 and later. Prerequisite Ensure that you are using the Red Hat Ansible Automation Platform operator patch version 2.5-20250305.9 and Red Hat Ansible Lightspeed operator version 2.5.250225. Procedure Select the platform user for whom you want to grant REST API access. You can select an existing user or create a platform user in the Red Hat Ansible Automation Platform. For the procedure, see the Creating a user section of Getting started with Ansible Automation Platform . Verify that you can log in to the Ansible Lightspeed portal for on-premise deployment ( https://<lightspeed_route>/ ) as the platform user you selected or created, and then log out. Create a token for the platform user: Log in to the Ansible Lightspeed portal for on-premise deployment ( https://<lightspeed_route>/admin ) as an administrator by using the following credentials: Username: admin Password: The secret that is named as <lightspeed-custom-resource-name>-admin-password in the Red Hat OpenShift Container Platform cluster namespace where Red Hat Ansible Lightspeed is deployed. On the Django administration window, select Users from the Users area. A list of users is displayed. Verify that the platform user is listed in the Users list. From the Django Oauth toolkit area, select Access tokens Add . Provide the following information and click Save : User : Use the magnifying glass icon to search and select the newly-created or existing user for whom you want to grant API access. Token : Specify a token for the user. Copy this token for later use. Id token : Select the token ID. Application : Select Ansible Lightspeed for VS Code . Expires : Select the date and time when you want the token to expire. Scope : Specify the scope as read write . An access token is created for the user. Log out from the Ansible Lightspeed portal for on-premise deployment. Make a direct call to the Ansible Lightspeed REST API by specifying the newly-created token in the authorization header: curl -H "Authorization: Bearer <token>" https://<lightspeed_route>/api/v1/me/
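The metrics and REST API endpoints described above are both plain HTTPS endpoints that accept a bearer token, so they can be exercised with curl. The following is a minimal sketch of the metrics check from section 3.3.9, assuming you created the system auditor token as described there; the token and route values are placeholders.

```
# Query the metrics endpoint with the system auditor's access token.
# Add -k only if the route uses a certificate your system does not already trust.
curl -s -H "Authorization: Bearer <auditor-token>" \
  "https://<lightspeed_route>/metrics" | head -n 20
```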
[ "fatal: unable to access 'https://private.repo./mine/ansible-rulebook.git': SSL certificate problem: unable to get local issuer certificate", "kubectl create secret generic <resourcename>-custom-certs --from-file=bundle-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> 1", "spec: bundle_cacert_secret: <resourcename>-custom-certs", "secretGenerator: - name: <resourcename>-custom-certs files: - bundle-ca.crt=<path+filename> options: disableNameSuffixHash: true", "```yaml spec: extra_settings: - setting: LOGOUT_ALLOWED_HOSTS value: \"'<lightspeed_route-HostName>'\" ```", "curl -H \"Authorization: Bearer <token>\" https://<lightspeed_route>/api/v1/me/" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide/set-up-lightspeed_lightspeed-user-guide
Appendix A. Versioning information
Appendix A. Versioning information Documentation last updated on Thursday, March 14th, 2024.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_decision_manager/versioning-information
Chapter 1. Key features
Chapter 1. Key features Streams for Apache Kafka simplifies the process of running Apache Kafka in an OpenShift cluster. This guide is intended as a starting point for building an understanding of Streams for Apache Kafka. The guide introduces some of the key concepts behind Kafka, which is central to Streams for Apache Kafka, explaining briefly the purpose of Kafka components. Configuration points are outlined, including options to secure and monitor Kafka. A distribution of Streams for Apache Kafka provides the files to deploy and manage a Kafka cluster, as well as example files for configuration and monitoring of your deployment . A typical Kafka deployment is described, as well as the tools used to deploy and manage Kafka. 1.1. Kafka capabilities The underlying data stream-processing capabilities and component architecture of Kafka can deliver: Microservices and other applications to share data with extremely high throughput and low latency Message ordering guarantees Message rewind/replay from data storage to reconstruct an application state Message compaction to remove old records when using a key-value log Horizontal scalability in a cluster configuration Replication of data to control fault tolerance Retention of high volumes of data for immediate access 1.2. Kafka use cases Kafka's capabilities make it suitable for: Event-driven architectures Event sourcing to capture changes to the state of an application as a log of events Message brokering Website activity tracking Operational monitoring through metrics Log collection and aggregation Commit logs for distributed systems Stream processing so that applications can respond to data in real time 1.3. How Streams for Apache Kafka supports Kafka Streams for Apache Kafka provides container images and operators for running Kafka on OpenShift. Streams for Apache Kafka operators are purpose-built with specialist operational knowledge to effectively manage Kafka on OpenShift. Operators simplify the process of: Deploying and running Kafka clusters Deploying and running Kafka components Configuring access to Kafka Securing access to Kafka Upgrading Kafka Managing brokers Creating and managing topics Creating and managing users
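As a concrete illustration of the "creating and managing topics" point above, topics are managed declaratively through a KafkaTopic custom resource once the operators are deployed. The sketch below uses the resource kind, API version, and labels from the upstream Strimzi project on which Streams for Apache Kafka is based; treat those identifiers and the example names as assumptions and verify them against the CRDs installed in your cluster.

```
# Hypothetical declarative topic definition applied with the OpenShift CLI.
# The strimzi.io/cluster label must match the name of your Kafka custom resource.
cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
EOF
```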
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_on_openshift_overview/key-features_str
Chapter 3. Installing a cluster quickly on AWS
Chapter 3. Installing a cluster quickly on AWS In OpenShift Container Platform version 4.15, you can install a cluster on Amazon Web Services (AWS) that uses the default configuration options. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . 
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. 
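The prompts in the preceding procedure can also be answered ahead of time: the installation program can write them to an install-config.yaml file that you review before deploying. The following sketch shows that flow as an alternative to the fully interactive run above; the directory name is a placeholder, and the generated file should be inspected before you continue.

```
# Capture the prompt answers into install-config.yaml, review the file, then deploy from it.
./openshift-install create install-config --dir <installation_directory>
# Inspect or adjust <installation_directory>/install-config.yaml as needed, then:
./openshift-install create cluster --dir <installation_directory> --log-level=info
```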
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 3.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.8. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.9. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.10. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
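Putting the CLI login and verification steps together, a quick post-install health check from the installation host might look like the following sketch; the directory placeholder is the one you passed to the installation program.

```
# Log in with the generated kubeconfig and confirm the cluster is healthy.
export KUBECONFIG=<installation_directory>/auth/kubeconfig
oc whoami                 # expect: system:admin
oc get clusteroperators   # all cluster Operators should report Available=True
oc get nodes              # all nodes should be Ready
```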
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_aws/installing-aws-default
4.364. ypserv
4.364. ypserv 4.364.1. RHBA-2011:1557 - ypserv bug fix and enhancement update An updated ypserv package that fixes two bugs and adds one enhancement is now available for Red Hat Enterprise Linux 6. The ypserv utility provides network information (login names, passwords, home directories, group information) to all of the machines on a network. It can enable users to log in on any machine on the network as long as the machine has the Network Information Service (NIS) client programs running and the user's password is recorded in the NIS passwd database. NIS was formerly known as Sun Yellow Pages (YP). Bug Fixes BZ# 699675 Uninitialized memory could be accessed when running the "yppasswdd" command with the "-x" option to pass data to an external program. As a consequence, redundant characters were prefixed to the request string, which led to incorrect parsing of the request. This update fixes the memory initialization and the request can now be parsed successfully. BZ# 734494 Prior to this update, the root user could see old passwords of other users. This was caused by a change request being logged using syslog when running the "yppasswdd" command with the "-x" option to pass data to an external program. With this update, the old password hash and the new password hash are cleared in syslog and the root user can no longer view the old passwords of other users. Enhancement BZ# 699667 Prior to this update, users were not able to change their passwords by running the "yppasswd" command if using the passwd.adjunct file, which prevents disclosing the encrypted passwords. With this update, users are now able to change their passwords. All users of ypserv are advised to upgrade to this updated package, which fixes these bugs and adds this enhancement.
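The advisory's recommended action translates to a short package update on an affected system. The following sketch uses the standard RHEL 6 package tooling with no options beyond the package name named in the errata.

```
# Check the installed ypserv version, then apply the updated package.
rpm -q ypserv
yum update ypserv
```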
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/ypserv
6.2. Known Issues
6.2. Known Issues Supplying an invalid version number in cluster.conf as a parameter to the cman_tool command will cause the cluster to stop processing information. To work around this issue, ensure that the version number used is valid. Under some circumstances, creating cluster mirrors with the '--nosync' option may cause I/O to become extremely slow. Note that this issue only affects I/O immediately after the creation of the mirror, and only when '--nosync' is used. To work around this issue, run the following command after creating the mirror: lvchange --refresh <VG>/<LV> . luci will not function with Red Hat Enterprise Linux 5 clusters unless each cluster node has ricci version 0.12.2-14. The sync state of an inactive LVM mirror cannot be determined. Consequently, the primary device of an LVM mirror can only be removed when the mirror is in-sync. If device-mapper-multipath is used, and the default path failure timeout value ( /sys/class/fc_remote_ports/rport-xxx/dev_loss_tmo ) is changed, the timeout value will revert to the default value after a path fails and is later restored. Note that this issue presents with the lpfc, qla2xxx, ibmfc or fnic Fibre Channel drivers. To work around this issue, the dev_loss_tmo value must be adjusted after each path fail/restore event. Generally, placing mirror legs on different physical devices improves data availability. The command lvcreate --alloc anywhere does not guarantee placement of data on different physical devices. Consequently, the use of this option is not recommended. If this option is used, the location of the data placement must be manually verified. The GFS2 fsck program, fsck.gfs2, currently assumes that the gfs2 file system is divided into evenly-spaced segments known as resource groups. This is always the case on file systems formatted by mkfs.gfs2. It will also be the case for most file systems created as GFS (gfs1) and converted to gfs2 format with gfs2_convert. However, if a GFS file system was resized (with gfs_grow) while it was in the GFS format, the resource groups might not be evenly spaced. If the resource groups are not evenly spaced, and the resource groups or the resource groups index (rindex) become damaged, fsck.gfs2 might not function correctly. There is currently no workaround for this issue. However, if the resource groups are not damaged, avoid this issue by copying the file system contents to a new device with evenly-spaced resource groups. Format the new device as gfs2 with mkfs.gfs2, and copy the contents from the old device to the new device. The new device will have evenly-spaced resource groups.
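Two of the workarounds above are one-line shell operations. The following sketch shows them together; the volume group, logical volume, remote port ID, and timeout value are placeholders — use the rport entry and the timeout your storage configuration actually requires.

```
# Refresh a cluster mirror created with '--nosync' to restore normal I/O throughput.
lvchange --refresh <VG>/<LV>

# Re-apply a non-default path failure timeout after a path fail/restore event.
echo <timeout_seconds> > /sys/class/fc_remote_ports/rport-<id>/dev_loss_tmo
```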
[ "lvchange --refresh <VG>/<LV>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/ar01s06s02
Chapter 140. Hazelcast SEDA Component
Chapter 140. Hazelcast SEDA Component Available as of Camel version 2.7 The Hazelcast SEDA component is one of Camel Hazelcast Components which allows you to access Hazelcast BlockingQueue. SEDA component differs from the rest components provided. It implements a work-queue in order to support asynchronous SEDA architectures, similar to the core "SEDA" component. 140.1. Options The Hazelcast SEDA component supports 3 options, which are listed below. Name Description Default Type hazelcastInstance (advanced) The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. HazelcastInstance hazelcastMode (advanced) The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Hazelcast SEDA endpoint is configured using URI syntax: with the following path and query parameters: 140.1.1. Path Parameters (1 parameters): Name Description Default Type cacheName Required The name of the cache String 140.1.2. Query Parameters (16 parameters): Name Description Default Type defaultOperation (common) To specify a default operation to use, if no operation header has been provided. HazelcastOperation hazelcastInstance (common) The hazelcast instance reference which can be used for hazelcast endpoint. HazelcastInstance hazelcastInstanceName (common) The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. String reliable (common) Define if the endpoint will use a reliable Topic struct or not. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean pollingTimeout (consumer) Define the polling timeout of the Queue consumer in Poll mode 10000 long poolSize (consumer) Define the Pool size for Queue Consumer Executor 1 int queueConsumerMode (consumer) Define the Queue Consumer mode: Listen or Poll Listen HazelcastQueueConsumer Mode exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean concurrentConsumers (seda) To use concurrent consumers polling from the SEDA queue. 1 int onErrorDelay (seda) Milliseconds before consumer continues polling after an error has occurred. 1000 int pollTimeout (seda) The timeout used when consuming from the SEDA queue. 
When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 int transacted (seda) If set to true then the consumer runs in transaction mode, where the messages in the seda queue will only be removed if the transaction commits, which happens when the processing is complete. false boolean transferExchange (seda) If set to true the whole Exchange will be transfered. If header or body contains not serializable objects, they will be skipped. false boolean 140.2. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.hazelcast-seda.customizer.hazelcast-instance.enabled Enable or disable the cache-manager customizer. true Boolean camel.component.hazelcast-seda.customizer.hazelcast-instance.override Configure if the cache manager eventually set on the component should be overridden by the customizer. false Boolean camel.component.hazelcast-seda.enabled Enable hazelcast-seda component true Boolean camel.component.hazelcast-seda.hazelcast-instance The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. The option is a com.hazelcast.core.HazelcastInstance type. String camel.component.hazelcast-seda.hazelcast-mode The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String camel.component.hazelcast-seda.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 140.3. SEDA producer - to("hazelcast-seda:foo") The SEDA producer provides no operations. You only send data to the specified queue. Java DSL : from("direct:foo") .to("hazelcast-seda:foo"); Spring DSL : <route> <from uri="direct:start" /> <to uri="hazelcast-seda:foo" /> </route> 140.4. SEDA consumer - from("hazelcast-seda:foo") The SEDA consumer provides no operations. You only retrieve data from the specified queue. Java DSL : from("hazelcast-seda:foo") .to("mock:result"); Spring DSL: <route> <from uri="hazelcast-seda:foo" /> <to uri="mock:result" /> </route>
[ "hazelcast-seda:cacheName", "from(\"direct:foo\") .to(\"hazelcast-seda:foo\");", "<route> <from uri=\"direct:start\" /> <to uri=\"hazelcast-seda:foo\" /> </route>", "from(\"hazelcast-seda:foo\") .to(\"mock:result\");", "<route> <from uri=\"hazelcast-seda:foo\" /> <to uri=\"mock:result\" /> </route>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hazelcast-seda-component
Chapter 6. Using the PHP scripting language
Chapter 6. Using the PHP scripting language Hypertext Preprocessor (PHP) is a general-purpose scripting language mainly used for server-side scripting. You can use PHP to run the PHP code by using a web server. 6.1. Installing the PHP scripting language In RHEL 9, PHP is available in the following versions and formats: PHP 8.0 as the php RPM package PHP 8.1 as the php:8.1 module stream PHP 8.2 as the php:8.2 module stream Procedure Depending on your scenario, complete one of the following steps: To install PHP 8.0, enter: To install the php:8.1 or php:8.2 module stream with the default profile, enter for example: The default common profile installs also the php-fpm package, and preconfigures PHP for use with the Apache HTTP Server or nginx. To install a specific profile of the php:8.1 or php:8.2 module stream, use for example: Available profiles are as follows: common - The default profile for server-side scripting using a web server. It includes the most widely used extensions. minimal - This profile installs only the command-line interface for scripting with PHP without using a web server. devel - This profile includes packages from the common profile and additional packages for development purposes. For example, to install PHP 8.1 for use without a web server, use: Additional resources Managing software with the DNF tool 6.2. Using the PHP scripting language with a web server 6.2.1. Using PHP with the Apache HTTP Server In Red Hat Enterprise Linux 9, the Apache HTTP Server enables you to run PHP as a FastCGI process server. FastCGI Process Manager (FPM) is an alternative PHP FastCGI daemon that allows a website to manage high loads. PHP uses FastCGI Process Manager by default in RHEL 9. You can run the PHP code using the FastCGI process server. Prerequisites The PHP scripting language is installed on your system. Procedure Install the httpd package: Start the Apache HTTP Server : Or, if the Apache HTTP Server is already running on your system, restart the httpd service after installing PHP: Start the php-fpm service: Optional: Enable both services to start at boot time: To obtain information about your PHP settings, create the index.php file with the following content in the /var/www/html/ directory: To run the index.php file, point the browser to: Optional: Adjust configuration if you have specific requirements: /etc/httpd/conf/httpd.conf - generic httpd configuration /etc/httpd/conf.d/php.conf - PHP-specific configuration for httpd /usr/lib/systemd/system/httpd.service.d/php-fpm.conf - by default, the php-fpm service is started with httpd /etc/php-fpm.conf - FPM main configuration /etc/php-fpm.d/www.conf - default www pool configuration Example 6.1. Running a "Hello, World!" PHP script using the Apache HTTP Server Create a hello directory for your project in the /var/www/html/ directory: Create a hello.php file in the /var/www/html/hello/ directory with the following content: Start the Apache HTTP Server : To run the hello.php file, point the browser to: As a result, a web page with the "Hello, World!" text is displayed. Additional resources Setting up the Apache HTTP web server 6.2.2. Using PHP with the nginx web server You can run PHP code through the nginx web server. Prerequisites The PHP scripting language is installed on your system. 
Procedure Install the nginx package: Start the nginx server: Or, if the nginx server is already running on your system, restart the nginx service after installing PHP: Start the php-fpm service: Optional: Enable both services to start at boot time: To obtain information about your PHP settings, create the index.php file with the following content in the /usr/share/nginx/html/ directory: To run the index.php file, point the browser to: Optional: Adjust configuration if you have specific requirements: /etc/nginx/nginx.conf - nginx main configuration /etc/nginx/conf.d/php-fpm.conf - FPM configuration for nginx /etc/php-fpm.conf - FPM main configuration /etc/php-fpm.d/www.conf - default www pool configuration Example 6.2. Running a "Hello, World!" PHP script using the nginx server Create a hello directory for your project in the /usr/share/nginx/html/ directory: Create a hello.php file in the /usr/share/nginx/html/hello/ directory with the following content: Start the nginx server: To run the hello.php file, point the browser to: As a result, a web page with the "Hello, World!" text is displayed. Additional resources Setting up and configuring NGINX 6.3. Running a PHP script using the command-line interface A PHP script is usually run using a web server, but also can be run using the command-line interface. Prerequisites The PHP scripting language is installed on your system. Procedure In a text editor, create a filename .php file Replace filename with the name of your file. Execute the created filename .php file from the command line: Example 6.3. Running a "Hello, World!" PHP script using the command-line interface Create a hello.php file with the following content using a text editor: Execute the hello.php file from the command line: As a result, "Hello, World!" is printed. 6.4. Additional resources httpd(8) - The manual page for the httpd service containing the complete list of its command-line options. httpd.conf(5) - The manual page for httpd configuration, describing the structure and location of the httpd configuration files. nginx(8) - The manual page for the nginx web server containing the complete list of its command-line options and list of signals. php-fpm(8) - The manual page for PHP FPM describing the complete list of its command-line options and configuration files.
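After either web server setup above, a quick way to confirm that PHP requests are actually being handed to FPM is to request a trivial script over HTTP. This sketch assumes the Apache HTTP Server layout from section 6.2.1; the file name version.php is illustrative.

```
# Confirm the interpreter and services, then request a one-line script through the web server.
php -v
systemctl is-active php-fpm httpd
echo '<?php echo PHP_VERSION; ?>' > /var/www/html/version.php
curl http://localhost/version.php
```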
[ "dnf install php", "dnf module install php:8.1", "dnf module install php:8.1/ profile", "dnf module install php:8.1/minimal", "dnf install httpd", "systemctl start httpd", "systemctl restart httpd", "systemctl start php-fpm", "systemctl enable php-fpm httpd", "echo '<?php phpinfo(); ?>' > /var/www/html/index.php", "http://<hostname>/", "mkdir hello", "<!DOCTYPE html> <html> <head> <title>Hello, World! Page</title> </head> <body> <?php echo 'Hello, World!'; ?> </body> </html>", "systemctl start httpd", "http://<hostname>/hello/hello.php", "dnf install nginx", "systemctl start nginx", "systemctl restart nginx", "systemctl start php-fpm", "systemctl enable php-fpm nginx", "echo '<?php phpinfo(); ?>' > /usr/share/nginx/html/index.php", "http://<hostname>/", "mkdir hello", "<!DOCTYPE html> <html> <head> <title>Hello, World! Page</title> </head> <body> <?php echo 'Hello, World!'; ?> </body> </html>", "systemctl start nginx", "http://<hostname>/hello/hello.php", "php filename .php", "<?php echo 'Hello, World!'; ?>", "php hello.php" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_and_using_dynamic_programming_languages/assembly_using-the-php-scripting-language_installing-and-using-dynamic-programming-languages
Preface
Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/service_registry_user_guide/pr01
Chapter 6. Managing Red Hat Cluster With system-config-cluster
Chapter 6. Managing Red Hat Cluster With system-config-cluster This chapter describes various administrative tasks for managing a Red Hat Cluster and consists of the following sections: Section 6.1, "Starting and Stopping the Cluster Software" Section 6.2, "Managing High-Availability Services" Section 6.4, "Backing Up and Restoring the Cluster Database" Section 6.5, "Disabling the Cluster Software" Section 6.6, "Diagnosing and Correcting Problems in a Cluster" 6.1. Starting and Stopping the Cluster Software To start the cluster software on a member, type the following commands in this order: service ccsd start service cman start (or service lock_gulmd start for GULM clusters) service fenced start (DLM clusters only) service clvmd start , if CLVM has been used to create clustered volumes service gfs start , if you are using Red Hat GFS service rgmanager start , if the cluster is running high-availability services ( rgmanager ) To stop the cluster software on a member, type the following commands in this order: service rgmanager stop , if the cluster is running high-availability services ( rgmanager ) service gfs stop , if you are using Red Hat GFS service clvmd stop , if CLVM has been used to create clustered volumes service fenced stop (DLM clusters only) service cman stop (or service lock_gulmd stop for GULM clusters) service ccsd stop Stopping the cluster services on a member causes its services to fail over to an active member.
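Because the start and stop order above matters, it can help to capture it in a small wrapper script. The sketch below hard-codes the start order for a DLM cluster that uses CLVM, Red Hat GFS, and rgmanager; drop the lines that do not apply to your configuration, or substitute lock_gulmd for cman on GULM clusters.

```
#!/bin/sh
# Start the cluster software on this member in the documented order (DLM cluster).
service ccsd start
service cman start       # use 'service lock_gulmd start' on GULM clusters
service fenced start     # DLM clusters only
service clvmd start      # only if CLVM is used for clustered volumes
service gfs start        # only if Red Hat GFS is used
service rgmanager start  # only if high-availability services are used
```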
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/ch-mgmt-scc-CA
Support
Support OpenShift Container Platform 4.17 Getting support for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc api-resources -o name | grep config.openshift.io", "oc explain <resource_name>.config.openshift.io", "oc get <resource_name>.config -o yaml", "oc edit <resource_name>.config -o yaml", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_total_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_used_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 
'match[]={__name__=\"job:odf_system_pvs:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"odf_system_bucket_count\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"odf_system_objects_total\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' --data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:min\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:max\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:avg\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:median\"}' --data-urlencode 'match[]={__name__=\"cluster:openshift_route_info:tls_termination:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"status:upgrading:version:rhoam_state:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_critical_alerts:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_warning_alerts:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_percentile:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_remaining_error_budget:max\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 
'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 
'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_hostedclusters:max\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_nodepools:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_bucket_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_buckets_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_accounts:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_usage:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_system_health_status:max\"}' --data-urlencode 'match[]={__name__=\"ocs_advanced_feature_usage\"}' --data-urlencode 'match[]={__name__=\"os_image_url_override:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:openshift_network_operator_ipsec_state:info\"}'", "INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)", "oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data", "oc extract secret/pull-secret -n openshift-config --to=.", "\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \" <email_address> \" } } }", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull-secret", "cp pull-secret pull-secret-backup", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret", "apiVersion: v1 kind: ConfigMap metadata: name: insights-config namespace: openshift-insights data: config.yaml: | dataReporting: obfuscation: - networking - workload_names sca: disabled: false interval: 2h alerting: disabled: false binaryData: {} immutable: false", "apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: true", "apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: false", "oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running", "oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1", "{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }", "apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled", "oc apply -f <your_datagather_definition>.yaml", "apiVersion: insights.openshift.io/v1alpha1 kind: 
DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled", "apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: 1 gatherConfig: disabledGatherers: - all 2", "spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info", "apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: gatherConfig: 1 disabledGatherers: all", "spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info", "apiVersion: v1 kind: ConfigMap data: config.yaml: | dataReporting: obfuscation: - workload_names", "apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]", "oc get -n openshift-insights deployment insights-operator -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights spec: template: spec: containers: - args: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1", "apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job spec: template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 
1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts:", "oc apply -n openshift-insights -f gather-job.yaml", "oc describe -n openshift-insights job/insights-operator-job", "Name: insights-operator-job Namespace: openshift-insights Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job>", "oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator", "I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms", "oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data", "oc delete -n openshift-insights job insights-operator-job", "oc extract secret/pull-secret -n openshift-config --to=.", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }", "curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload", "* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%", "apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: interval: 2h", "apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: true", "apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: false", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5", "oc import-image is/must-gather -n openshift", "oc adm must-gather", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 2", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces 
├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ └── route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├──", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc adm must-gather -- gather_network_logs", "tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1", "Disk usage exceeds the volume percentage of 30% for mounted directory. 
Exiting", "oc adm must-gather --volume-percentage <storage_percentage>", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "oc get nodes", "oc debug node/my-cluster-node", "oc new-project dummy", "oc patch namespace dummy --type=merge -p '{\"metadata\": {\"annotations\": { \"scheduler.alpha.kubernetes.io/defaultTolerations\": \"[{\\\"operator\\\": \\\"Exists\\\"}]\"}}}'", "oc debug node/my-cluster-node", "chroot /host", "toolbox", "sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1", "sos report --all-logs", "Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'", "oc adm node-logs --role=master -u kubelet 1", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "oc adm must-gather --dest-dir /tmp/captures \\// <.> --source-dir '/tmp/tcpdump/' \\// <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \\// <.> --node-selector 'node-role.kubernetes.io/worker' \\// <.> --host-network=true \\// <.> --timeout 30s \\// <.> -- tcpdump -i any \\// <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300", "tmp/captures ├── event-filter.html ├── ip-10-0-192-217-ec2-internal 1 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:31.pcap ├── ip-10-0-201-178-ec2-internal 2 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:30.pcap ├── ip- └── timestamp", "oc get nodes", "oc debug node/my-cluster-node", "chroot /host", "ip ad", "toolbox", "tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1", "chroot /host crictl ps", "chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'", "nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1", "chroot /host", "toolbox", "dnf install -y <package_name>", "chroot /host", "REGISTRY=quay.io 1 IMAGE=fedora/fedora:latest 2 TOOLBOX_NAME=toolbox-fedora-latest 3", "toolbox", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8", "oc describe clusterversion", "Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: 
quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8", "ssh <user_name>@<load_balancer> systemctl status haproxy", "ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'", "ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'", "dig <wildcard_fqdn> @<dns_server>", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1", "./openshift-install create ignition-configs --dir=./install_dir", "tail -f ~/<installation_directory>/.openshift_install.log", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "oc adm node-logs --role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=master -u crio", "ssh [email protected]_name.sub_domain.domain journalctl -b -f -u crio.service", "curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1", "grep -is 'bootstrap.ign' /var/log/httpd/access_log", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'", "curl -I http://<http_server_fqdn>:<port>/master.ign 1", "grep -is 'master.ign' /var/log/httpd/access_log", "oc get nodes", "oc describe node <master_node>", "oc get daemonsets -n openshift-ovn-kubernetes", "oc get pods -n openshift-ovn-kubernetes", "oc logs <ovn-k_pod> -n openshift-ovn-kubernetes", "oc get network.config.openshift.io cluster -o yaml", "./openshift-install create manifests", "oc get pods -n openshift-network-operator", "oc logs pod/<network_operator_pod_name> -n openshift-network-operator", "oc adm node-logs --role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=master -u crio", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "curl https://api-int.<cluster_name>:22623/config/master", "dig api-int.<cluster_name> @<dns_server>", "dig -x <load_balancer_mco_ip_address> @<dns_server>", "ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master", "ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking", "openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text", "oc get pods -n openshift-etcd", "oc get pods -n openshift-etcd-operator", "oc describe pod/<pod_name> -n <namespace>", "oc logs pod/<pod_name> -n <namespace>", "oc logs pod/<pod_name> -c <container_name> -n <namespace>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "oc adm node-logs 
--role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired'", "curl -I http://<http_server_fqdn>:<port>/worker.ign 1", "grep -is 'worker.ign' /var/log/httpd/access_log", "oc get nodes", "oc describe node <worker_node>", "oc get pods -n openshift-machine-api", "oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api", "oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator", "oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy", "oc adm node-logs --role=worker -u kubelet", "ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=worker -u crio", "ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "oc adm node-logs --role=worker --path=sssd", "oc adm node-logs --role=worker --path=sssd/sssd.log", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "curl https://api-int.<cluster_name>:22623/config/worker", "dig api-int.<cluster_name> @<dns_server>", "dig -x <load_balancer_mco_ip_address> @<dns_server>", "ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker", "ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking", "openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text", "oc get clusteroperators", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc describe clusteroperator <operator_name>", "oc get pods -n <operator_namespace>", "oc describe pod/<operator_pod_name> -n <operator_namespace>", "oc logs pod/<operator_pod_name> -n <operator_namespace>", "oc get pod -o \"jsonpath={range .status.containerStatuses[*]}{.name}{'\\t'}{.state}{'\\t'}{.image}{'\\n'}{end}\" <operator_pod_name> -n <operator_namespace>", "oc adm release info <image_path>:<tag> --commits", "./openshift-install gather bootstrap --dir <installation_directory> 1", "./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address> 5", "INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"", "oc get nodes", "oc adm top nodes", "oc adm top node my-node", "oc debug node/my-node", "chroot /host", "systemctl is-active kubelet", "systemctl status kubelet", "oc adm node-logs --role=master -u kubelet 1", "oc adm node-logs --role=master --path=openshift-apiserver", 
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "oc debug node/my-node", "chroot /host", "systemctl is-active crio", "systemctl status crio.service", "oc adm node-logs --role=master -u crio", "oc adm node-logs <node_name> -u crio", "ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory", "can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks.", "oc adm cordon <node_name>", "oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data", "ssh [email protected] sudo -i", "systemctl stop kubelet", ".. for pod in USD(crictl pods -q); do if [[ \"USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)\" != \"NODE\" ]]; then crictl rmp -f USDpod; fi; done", "crictl rmp -fa", "systemctl stop crio", "crio wipe -f", "systemctl start crio systemctl start kubelet", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.30.3", "oc adm uncordon <node_name>", "NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.30.3", "rpm-ostree kargs --append='crashkernel=256M'", "systemctl enable kdump.service", "systemctl reboot", "variant: openshift version: 4.17.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\" KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable\" 6 KEXEC_ARGS=\"-s\" KDUMP_IMG=\"vmlinuz\" systemd: units: - name: kdump.service enabled: true", "nfs server.example.com:/export/cores core_collector makedumpfile -l --message-level 7 -d 31 extra_bins /sbin/mount.nfs extra_modules nfs nfsv3 nfs_layout_nfsv41_files blocklayoutdriver nfs_layout_flexfiles nfs_layout_nfsv41_files", "butane 99-worker-kdump.bu -o 99-worker-kdump.yaml", "oc create -f 99-worker-kdump.yaml", "systemctl --failed", "journalctl -u <unit>.service", "NODEIP_HINT=192.0.2.1", "echo -n 'NODEIP_HINT=192.0.2.1' | base64 -w0", "Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx==", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-nodeip-hint-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-nodeip-hint-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: 
data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration", "[connection] id=eno1 type=ethernet interface-name=eno1 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20", "[connection] id=eno2 type=ethernet interface-name=eno2 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20", "[connection] id=bond1 type=bond interface-name=bond1 autoconnect=true connection.autoconnect-slaves=1 autoconnect-priority=20 [bond] mode=802.3ad miimon=100 xmit_hash_policy=\"layer3+4\" [ipv4] method=auto", "base64 <directory_path>/en01.config", "base64 <directory_path>/eno2.config", "base64 <directory_path>/bond1.config", "export ROLE=<machine_role>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-USD{ROLE}-sec-bridge-cni spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:;base64,<base-64-encoded-contents-for-bond1.conf> path: /etc/NetworkManager/system-connections/bond1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno1.conf> path: /etc/NetworkManager/system-connections/eno1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno2.conf> path: /etc/NetworkManager/system-connections/eno2.nmconnection filesystem: root mode: 0600", "oc create -f <machine_config_file_name>", "bond1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-extra-bridge spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/ovnk/extra_bridge mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond1 filesystem: root", "oc create -f <machine_config_file_name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-br-ex-override spec: config: ignition: version: 3.2.0 storage: files: - path: /var/lib/ovnk/iface_default_hint mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond0 1 filesystem: root", "oc create -f <machine_config_file_name>", "oc get nodes -o json | grep --color exgw-ip-addresses", "\"k8s.ovn.org/l3-gateway-config\": \\\"exgw-ip-address\\\":\\\"172.xx.xx.yy/24\\\",\\\"next-hops\\\":[\\\"xx.xx.xx.xx\\\"],", "oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep mtu | grep br-ex\"", "Starting pod/worker-1-debug To use host binaries, run `chroot /host` 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 6: br-ex1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000", "oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep -A1 -E 'br-ex|bond0'", "Starting pod/worker-1-debug To use host binaries, run `chroot /host` sh-5.1# ip a | grep -A1 -E 'br-ex|bond0' 2: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff -- 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff inet 10.xx.xx.xx/21 brd 10.xx.xx.255 scope global dynamic noprefixroute br-ex", "E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for 
file /etc/systemd/system/ovs-vswitchd.service: [Unit]", "oc debug node/<node_name>", "chroot /host", "ovs-appctl vlog/list", "console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO", "Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg", "systemctl daemon-reload", "systemctl restart ovs-vswitchd", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service", "oc apply -f 99-change-ovs-loglevel.yaml", "oc adm node-logs <node_name> -u ovs-vswitchd", "journalctl -b -f -u ovs-vswitchd.service", "oc get subs -n <operator_namespace>", "oc describe sub <subscription_name> -n <operator_namespace>", "Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy", "oc get catalogsources -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m", "oc describe catalogsource example-catalog -n openshift-marketplace", "Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m", "oc describe pod example-catalog-bwt8z -n openshift-marketplace", "Name: example-catalog-bwt8z 
Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull", "oc get clusteroperators", "oc get pod -n <operator_namespace>", "oc describe pod <operator_pod_name> -n <operator_namespace>", "oc debug node/my-node", "chroot /host", "crictl ps", "crictl ps --name network-operator", "oc get pods -n <operator_namespace>", "oc logs pod/<pod_name> -n <operator_namespace>", "oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1", "oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master", "oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "true", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "false", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"", "rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host", "oc get sub,csv -n <namespace>", "NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION 
REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded", "oc delete subscription <subscription_name> -n <namespace>", "oc delete csv <csv_name> -n <namespace>", "oc get job,configmap -n openshift-marketplace", "NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s", "oc delete job <job_name> -n openshift-marketplace", "oc delete configmap <configmap_name> -n openshift-marketplace", "oc get sub,csv,installplan -n <namespace>", "message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'", "oc get namespaces", "operator-ns-1 Terminating", "oc get crds", "oc delete crd <crd_name>", "oc get EtcdCluster -n <namespace_name>", "oc get EtcdCluster --all-namespaces", "oc delete <cr_name> <cr_instance_name> -n <namespace_name>", "oc get namespace <namespace_name>", "oc get sub,csv,installplan -n <namespace>", "oc project <project_name>", "oc get pods", "oc status", "skopeo inspect docker://<image_reference>", "oc edit deployment/my-deployment", "oc get pods -w", "oc get events", "oc logs <pod_name>", "oc logs <pod_name> -c <container_name>", "oc exec <pod_name> -- ls -alh /var/log", "total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp", "oc exec <pod_name> cat /var/log/<path_to_log>", "2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 
2023-07-10T10:29:38+0000 INFO", "oc exec <pod_name> -c <container_name> ls /var/log", "oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>", "oc project <namespace>", "oc rsh <pod_name> 1", "oc rsh -c <container_name> pod/<pod_name>", "oc port-forward <pod_name> <host_port>:<pod_port> 1", "oc get deployment -n <project_name>", "oc debug deployment/my-deployment --as-root -n <project_name>", "oc get deploymentconfigs -n <project_name>", "oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>", "oc cp <local_path> <pod_name>:/<path> -c <container_name> 1", "oc cp <pod_name>:/<path> -c <container_name> <local_path> 1", "oc get pods -w 1", "oc logs -f pod/<application_name>-<build_number>-build", "oc logs -f pod/<application_name>-<build_number>-deploy", "oc logs -f pod/<application_name>-<build_number>-<random_string>", "oc describe pod/my-app-1-akdlg", "oc logs -f pod/my-app-1-akdlg", "oc exec my-app-1-akdlg -- cat /var/log/my-application.log", "oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log", "oc exec -it my-app-1-akdlg /bin/bash", "oc debug node/my-cluster-node", "chroot /host", "crictl ps", "crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}'", "nsenter -n -t 31150 -- ip ad", "Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4", "oc delete pod <old_pod> --force=true --grace-period=0", "oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator", "ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")' <username>@<windows_node_internal_ip> 1 2", "oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}", "ssh -L 2020:<windows_node_internal_ip>:3389 \\ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")", "oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}", "C:\\> net user <username> * 1", "oc adm node-logs -l kubernetes.io/os=windows --path= /ip-10-0-138-252.us-east-2.compute.internal containers /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay /ip-10-0-138-252.us-east-2.compute.internal kube-proxy /ip-10-0-138-252.us-east-2.compute.internal kubelet /ip-10-0-138-252.us-east-2.compute.internal pods", "oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log", "oc adm node-logs -l kubernetes.io/os=windows --path=journal", "oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker", "C:\\> powershell", "C:\\> Get-EventLog -LogName Application -Source Docker", "oc -n ns1 get service prometheus-example-app -o yaml", "labels: app: prometheus-example-app", "oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml", "apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app", "oc -n 
openshift-user-workload-monitoring get pods", "NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator", "level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))", "topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))", "HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')", "TOKEN=USD(oc whoami -t)", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"", "\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'cd /prometheus/;du -hs USD(ls -dtr */ | grep -Eo \"[0-9|A-Z]{26}\")'", "308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B", "oc debug prometheus-k8s-0 -n openshift-monitoring -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'ls -latr /prometheus/ | egrep -o \"[0-9|A-Z]{26}\" | head -3 | while read BLOCK; do rm -r /prometheus/USDBLOCK; done'", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- df -h /prometheus/", "Starting pod/prometheus-k8s-0-debug-j82w4 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod", "oc <command> --loglevel <log_level>", "oc whoami -t", "sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/support/index
Chapter 12. Certificates APIs
Chapter 12. Certificates APIs 12.1. Certificates APIs 12.1.1. CertificateSigningRequest [certificates.k8s.io/v1] Description CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued. Kubelets use this API to obtain: 1. client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client-kubelet" signerName). 2. serving certificates for TLS endpoints kube-apiserver can connect to securely (with the "kubernetes.io/kubelet-serving" signerName). This API can be used to request client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client" signerName), or to obtain certificates from custom non-Kubernetes signers. Type object 12.2. CertificateSigningRequest [certificates.k8s.io/v1] Description CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued. Kubelets use this API to obtain: 1. client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client-kubelet" signerName). 2. serving certificates for TLS endpoints kube-apiserver can connect to securely (with the "kubernetes.io/kubelet-serving" signerName). This API can be used to request client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client" signerName), or to obtain certificates from custom non-Kubernetes signers. Type object Required spec 12.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object CertificateSigningRequestSpec contains the certificate request. status object CertificateSigningRequestStatus contains conditions used to indicate approved/denied/failed status of the request, and the issued certificate. 12.2.1.1. .spec Description CertificateSigningRequestSpec contains the certificate request. Type object Required request signerName Property Type Description expirationSeconds integer expirationSeconds is the requested duration of validity of the issued certificate. The certificate signer may issue a certificate with a different validity duration so a client must check the delta between the notBefore and and notAfter fields in the issued certificate to determine the actual duration. The v1.22+ in-tree implementations of the well-known Kubernetes signers will honor this field as long as the requested duration is not greater than the maximum duration they will honor per the --cluster-signing-duration CLI flag to the Kubernetes controller manager. Certificate signers may not honor this field for various reasons: 1. Old signer that is unaware of the field (such as the in-tree implementations prior to v1.22) 2. Signer whose configured maximum is shorter than the requested duration 3. 
Signer whose configured minimum is longer than the requested duration The minimum valid value for expirationSeconds is 600, i.e. 10 minutes. extra object extra contains extra attributes of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. extra{} array (string) groups array (string) groups contains group membership of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. request string request contains an x509 certificate signing request encoded in a "CERTIFICATE REQUEST" PEM block. When serialized as JSON or YAML, the data is additionally base64-encoded. signerName string signerName indicates the requested signer, and is a qualified name. List/watch requests for CertificateSigningRequests can filter on this field using a "spec.signerName=NAME" fieldSelector. Well-known Kubernetes signers are: 1. "kubernetes.io/kube-apiserver-client": issues client certificates that can be used to authenticate to kube-apiserver. Requests for this signer are never auto-approved by kube-controller-manager, can be issued by the "csrsigning" controller in kube-controller-manager. 2. "kubernetes.io/kube-apiserver-client-kubelet": issues client certificates that kubelets use to authenticate to kube-apiserver. Requests for this signer can be auto-approved by the "csrapproving" controller in kube-controller-manager, and can be issued by the "csrsigning" controller in kube-controller-manager. 3. "kubernetes.io/kubelet-serving" issues serving certificates that kubelets use to serve TLS endpoints, which kube-apiserver can connect to securely. Requests for this signer are never auto-approved by kube-controller-manager, and can be issued by the "csrsigning" controller in kube-controller-manager. More details are available at https://k8s.io/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers Custom signerNames can also be specified. The signer defines: 1. Trust distribution: how trust (CA bundles) are distributed. 2. Permitted subjects: and behavior when a disallowed subject is requested. 3. Required, permitted, or forbidden x509 extensions in the request (including whether subjectAltNames are allowed, which types, restrictions on allowed values) and behavior when a disallowed extension is requested. 4. Required, permitted, or forbidden key usages / extended key usages. 5. Expiration/certificate lifetime: whether it is fixed by the signer, configurable by the admin. 6. Whether or not requests for CA certificates are allowed. uid string uid contains the uid of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. usages array (string) usages specifies a set of key usages requested in the issued certificate. Requests for TLS client certificates typically request: "digital signature", "key encipherment", "client auth". Requests for TLS serving certificates typically request: "key encipherment", "digital signature", "server auth". Valid values are: "signing", "digital signature", "content commitment", "key encipherment", "key agreement", "data encipherment", "cert sign", "crl sign", "encipher only", "decipher only", "any", "server auth", "client auth", "code signing", "email protection", "s/mime", "ipsec end system", "ipsec tunnel", "ipsec user", "timestamping", "ocsp signing", "microsoft sgc", "netscape sgc" username string username contains the name of the user that created the CertificateSigningRequest. 
Populated by the API server on creation and immutable. 12.2.1.2. .spec.extra Description extra contains extra attributes of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. Type object 12.2.1.3. .status Description CertificateSigningRequestStatus contains conditions used to indicate approved/denied/failed status of the request, and the issued certificate. Type object Property Type Description certificate string certificate is populated with an issued certificate by the signer after an Approved condition is present. This field is set via the /status subresource. Once populated, this field is immutable. If the certificate signing request is denied, a condition of type "Denied" is added and this field remains empty. If the signer cannot issue the certificate, a condition of type "Failed" is added and this field remains empty. Validation requirements: 1. certificate must contain one or more PEM blocks. 2. All PEM blocks must have the "CERTIFICATE" label, contain no headers, and the encoded data must be a BER-encoded ASN.1 Certificate structure as described in section 4 of RFC5280. 3. Non-PEM content may appear before or after the "CERTIFICATE" PEM blocks and is unvalidated, to allow for explanatory text as described in section 5.2 of RFC7468. If more than one PEM block is present, and the definition of the requested spec.signerName does not indicate otherwise, the first block is the issued certificate, and subsequent blocks should be treated as intermediate certificates and presented in TLS handshakes. The certificate is encoded in PEM format. When serialized as JSON or YAML, the data is additionally base64-encoded, so it consists of: base64( -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- ) conditions array conditions applied to the request. Known conditions are "Approved", "Denied", and "Failed". conditions[] object CertificateSigningRequestCondition describes a condition of a CertificateSigningRequest object 12.2.1.4. .status.conditions Description conditions applied to the request. Known conditions are "Approved", "Denied", and "Failed". Type array 12.2.1.5. .status.conditions[] Description CertificateSigningRequestCondition describes a condition of a CertificateSigningRequest object Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime is the time the condition last transitioned from one status to another. If unset, when a new condition type is added or an existing condition's status is changed, the server defaults this to the current time. lastUpdateTime Time lastUpdateTime is the time of the last update to this condition message string message contains a human readable message with details about the request state reason string reason indicates a brief reason for the request state status string status of the condition, one of True, False, Unknown. Approved, Denied, and Failed conditions may not be "False" or "Unknown". type string type of the condition. Known conditions are "Approved", "Denied", and "Failed". An "Approved" condition is added via the /approval subresource, indicating the request was approved and should be issued by the signer. A "Denied" condition is added via the /approval subresource, indicating the request was denied and should not be issued by the signer. A "Failed" condition is added via the /status subresource, indicating the signer failed to issue the certificate. Approved and Denied conditions are mutually exclusive. 
Approved, Denied, and Failed conditions cannot be removed once added. Only one condition of a given type is allowed. 12.2.2. API endpoints The following API endpoints are available: /apis/certificates.k8s.io/v1/certificatesigningrequests DELETE : delete collection of CertificateSigningRequest GET : list or watch objects of kind CertificateSigningRequest POST : create a CertificateSigningRequest /apis/certificates.k8s.io/v1/watch/certificatesigningrequests GET : watch individual changes to a list of CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name} DELETE : delete a CertificateSigningRequest GET : read the specified CertificateSigningRequest PATCH : partially update the specified CertificateSigningRequest PUT : replace the specified CertificateSigningRequest /apis/certificates.k8s.io/v1/watch/certificatesigningrequests/{name} GET : watch changes to an object of kind CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/status GET : read status of the specified CertificateSigningRequest PATCH : partially update status of the specified CertificateSigningRequest PUT : replace status of the specified CertificateSigningRequest /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/approval GET : read approval of the specified CertificateSigningRequest PATCH : partially update approval of the specified CertificateSigningRequest PUT : replace approval of the specified CertificateSigningRequest 12.2.2.1. /apis/certificates.k8s.io/v1/certificatesigningrequests Table 12.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CertificateSigningRequest Table 12.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 12.3. Body parameters Parameter Type Description body DeleteOptions schema Table 12.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CertificateSigningRequest Table 12.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.6. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequestList schema 401 - Unauthorized Empty HTTP method POST Description create a CertificateSigningRequest Table 12.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.8. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 12.9. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 202 - Accepted CertificateSigningRequest schema 401 - Unauthorized Empty 12.2.2.2. /apis/certificates.k8s.io/v1/watch/certificatesigningrequests Table 12.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead. Table 12.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 12.2.2.3. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name} Table 12.12. Global path parameters Parameter Type Description name string name of the CertificateSigningRequest Table 12.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CertificateSigningRequest Table 12.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 12.15. Body parameters Parameter Type Description body DeleteOptions schema Table 12.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CertificateSigningRequest Table 12.17. 
HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CertificateSigningRequest Table 12.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 12.19. Body parameters Parameter Type Description body Patch schema Table 12.20. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CertificateSigningRequest Table 12.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.22. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 12.23. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty 12.2.2.4. /apis/certificates.k8s.io/v1/watch/certificatesigningrequests/{name} Table 12.24. Global path parameters Parameter Type Description name string name of the CertificateSigningRequest Table 12.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 12.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 12.2.2.5. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/status Table 12.27. Global path parameters Parameter Type Description name string name of the CertificateSigningRequest Table 12.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method GET Description read status of the specified CertificateSigningRequest Table 12.29. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CertificateSigningRequest Table 12.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 12.31. Body parameters Parameter Type Description body Patch schema Table 12.32. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CertificateSigningRequest Table 12.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.34. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 12.35. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty 12.2.2.6. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/approval Table 12.36. Global path parameters Parameter Type Description name string name of the CertificateSigningRequest Table 12.37. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read approval of the specified CertificateSigningRequest Table 12.38. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update approval of the specified CertificateSigningRequest Table 12.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 12.40. Body parameters Parameter Type Description body Patch schema Table 12.41. 
HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace approval of the specified CertificateSigningRequest Table 12.42. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.43. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 12.44. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty
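The endpoints above are typically exercised by creating a CertificateSigningRequest object and then acting on its approval and status subresources. The following is a minimal, illustrative sketch only; the resource name my-client-csr, the file names, and the placeholder values are assumptions made for this example and do not come from this reference. A client base64-encodes a PEM "CERTIFICATE REQUEST" block into spec.request and selects a signerName and key usages that match its purpose:

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-client-csr
spec:
  request: <base64_encoded_pem_csr>      # for example: base64 -w0 my-client.csr
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400               # requested validity; the signer may issue a different duration
  usages:
  - digital signature
  - key encipherment
  - client auth

After the object is created with oc create -f <file>, an administrator can approve it with oc adm certificate approve my-client-csr, which adds an Approved condition through the /approval subresource. Once the signer issues the certificate through the /status subresource, the PEM data can be retrieved with oc get csr my-client-csr -o jsonpath='{.status.certificate}' | base64 -d.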
null
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/certificates-apis-1
Preface
Preface Automation hub distributes certified, supported collections from partners to customers. Each collection includes content such as modules, roles, plugins and documentation. The first time you upload a collection to automation hub, our Partner Engineering team will start reviewing it for certification. You can manage your collections by uploading or deleting collections using the automation hub user interface or the ansible-galaxy client.
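For orientation, a typical ansible-galaxy workflow for publishing a collection might look like the following sketch; the namespace, collection name, version, server URL, and token are placeholder assumptions rather than values defined in this guide:

ansible-galaxy collection build                      # produces <namespace>-<collection_name>-<version>.tar.gz
ansible-galaxy collection publish <namespace>-<collection_name>-<version>.tar.gz \
  --server <automation_hub_api_url> \
  --token <api_token>

The same upload can be performed through the automation hub user interface, and the first upload of a new collection triggers the certification review described above.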
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/uploading_content_to_red_hat_automation_hub/pr01
Appendix B. Revision History
Appendix B. Revision History Revision History Revision 7.1-1 Wed Aug 7 2019 Steven Levine Preparing document for 7.7 GA publication. Revision 6.1-2 Thu Oct 4 2018 Steven Levine Preparing document for 7.6 GA publication. Revision 5.1-2 Wed Mar 14 2018 Steven Levine Preparing document for 7.5 GA publication. Revision 5.1-1 Wed Dec 13 2017 Steven Levine Preparing document for 7.5 Beta publication. Revision 4.1-3 Tue Aug 1 2017 Steven Levine Document version for 7.4 GA publication. Revision 4.1-1 Wed May 10 2017 Steven Levine Preparing document for 7.4 Beta publication. Revision 3.1-2 Mon Oct 17 2016 Steven Levine Version for 7.3 GA publication. Revision 3.1-1 Wed Aug 17 2016 Steven Levine Preparing document for 7.3 Beta publication. Revision 2.1-5 Mon Nov 9 2015 Steven Levine Preparing document for 7.2 GA publication. Revision 2.1-1 Tue Aug 18 2015 Steven Levine Preparing document for 7.2 Beta publication. Revision 1.1-3 Tue Feb 17 2015 Steven Levine Version for 7.1 GA Revision 1.1-1 Thu Dec 04 2014 Steven Levine Version for 7.1 Beta Release Revision 0.1-9 Tue Jun 03 2014 John Ha Version for 7.0 GA Release Revision 0.1-4 Wed Nov 27 2013 John Ha Build for Beta of Red Hat Enterprise Linux 7 Revision 0.1-1 Wed Jan 16 2013 Steven Levine First version for Red Hat Enterprise Linux 7
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/appe-publican-revision_history
Chapter 17. Replacing storage devices
Chapter 17. Replacing storage devices 17.1. Replacing operational or failed storage devices on Google Cloud installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on a Google Cloud installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing operational nodes on Google Cloud installer-provisioned infrastructure Replacing failed nodes on Google Cloud installer-provisioned infrastructures.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/replacing-storage-devices_rhodf
Chapter 1. Overview
Chapter 1. Overview Version 4 of the Python software development kit is a collection of classes that allows you to interact with the Red Hat Virtualization Manager in Python-based projects. By downloading these classes and adding them to your project, you can access a range of functionality for high-level automation of administrative tasks. Note Version 3 of the SDK is no longer supported. For more information, consult the RHV 4.3 version of this guide . Python 3.7 and async In Python 3.7 and later versions, async is a reserved keyword. You cannot use the async parameter in methods of services that previously supported it, as in the following example, because async=True will cause an error: dc = dc_service.update( types.DataCenter( description='Updated description', ), async=True, ) The solution is to add an underscore to the parameter ( async_ ): dc = dc_service.update( types.DataCenter( description='Updated description', ), async_=True, ) Note This limitation applies only to Python 3.7 and later. Earlier versions of Python do not require this modification. 1.1. Prerequisites To install the Python software development kit, you must have: A system where Red Hat Enterprise Linux 8 is installed. Both the Server and Workstation variants are supported. A subscription to Red Hat Virtualization entitlements. Important The software development kit is an interface for the Red Hat Virtualization REST API. Use the version of the software development kit that corresponds to the version of your Red Hat Virtualization environment. For example, if you are using Red Hat Virtualization 4.3, use the V4 Python software development kit. 1.2. Installing the Python Software Development Kit To install the Python software development kit: Enable the repositories that are appropriate for your hardware platform . For example, for x86-64 hardware, enable: # subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms # subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms # subscription-manager release --set=8.6 Install the required packages: # dnf install python3-ovirt-engine-sdk4 The Python software development kit is installed into the Python 3 site-packages directory, and the accompanying documentation and examples are installed to /usr/share/doc/python3-ovirt-engine-sdk4 .
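The following minimal sketch shows how the installed SDK is typically used to open a connection and read from the API; the Manager URL, credentials, and CA file path are placeholder values for your own environment, not part of this guide's procedures:
import ovirtsdk4 as sdk

# Placeholder connection details: replace the URL, credentials, and CA file path.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='<password>',
    ca_file='ca.pem',
)

# List the names of the data centers visible to this user.
dcs_service = connection.system_service().data_centers_service()
for dc in dcs_service.list():
    print(dc.name)

connection.close()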
[ "dc = dc_service.update( types.DataCenter( description='Updated description', ), async=True, )", "dc = dc_service.update( types.DataCenter( description='Updated description', ), async_=True, )", "subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms subscription-manager release --set=8.6", "dnf install python3-ovirt-engine-sdk4" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/python_sdk_guide/chap-overview
Chapter 20. JbodStorage schema reference
Chapter 20. JbodStorage schema reference Used in: KafkaClusterSpec , KafkaNodePoolSpec The type property is a discriminator that distinguishes use of the JbodStorage type from EphemeralStorage , PersistentClaimStorage . It must have the value jbod for the type JbodStorage . Property Description type Must be jbod . string volumes List of volumes as Storage objects representing the JBOD disks array. EphemeralStorage , PersistentClaimStorage array
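As an illustrative sketch only, a storage section of a Kafka custom resource using the jbod type might look like the following; the volume IDs, sizes, and persistent-claim settings are example values, not recommendations:
# Example values only: two persistent-claim volumes presented as a JBOD array.
storage:
  type: jbod
  volumes:
    - id: 0
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
    - id: 1
      type: persistent-claim
      size: 100Gi
      deleteClaim: false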
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-JbodStorage-reference
Chapter 7. Installing a cluster on GCP into an existing VPC
Chapter 7. Installing a cluster on GCP into an existing VPC In OpenShift Container Platform version 4.18, you can install a cluster into an existing Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 7.2. About using a custom VPC In OpenShift Container Platform 4.18, you can deploy a cluster into existing subnets in an existing Virtual Private Cloud (VPC) in Google Cloud Platform (GCP). By deploying OpenShift Container Platform into an existing GCP VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. You must configure networking for the subnets. 7.2.1. Requirements for using your VPC The union of the VPC CIDR block and the machine network CIDR must be non-empty. The subnets must be within the machine network. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 7.2.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide one subnet for control-plane machines and one subnet for compute machines. The subnet's CIDRs belong to the machine CIDR that you specified. 7.2.3. Division of permissions Some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. 7.2.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates.
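To make the VPC requirements above concrete, the following install-config.yaml fragment is a minimal sketch of how an existing network and its subnets are referenced; all values are placeholders, the subnets must fall within the machineNetwork CIDR, and the complete sample file appears later in section 7.6.7:
# Placeholder values: substitute your own project, region, VPC, and subnet names.
platform:
  gcp:
    projectID: <project_id>
    region: us-central1
    network: <existing_vpc_name>
    controlPlaneSubnet: <control_plane_subnet_name>
    computeSubnet: <compute_subnet_name>
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16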
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 7.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory.
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 7.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 
Additional resources Optimizing storage 7.6.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Note Not all instance types are available in all regions and zones. For a detailed breakdown of which instance types are available in which zones, see regions and zones (Google documentation). Some instance types require the use of Hyperdisk storage. If you use an instance type that requires Hyperdisk storage, all of the nodes in your cluster must support Hyperdisk storage, and you must change the default storage class to use Hyperdisk storage. For more information, see machine series support for Hyperdisk (Google documentation). For instructions on modifying storage classes, see the "GCE PersistentDisk (gcePD) object definition" section in the Dynamic Provisioning page in Storage . Example 7.1. Machine series A2 A3 C2 C2D C3 C3D C4 E2 M1 N1 N2 N2D N4 Tau T2D 7.6.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 7.2. Machine series for 64-bit ARM machines C4A Tau T2A 7.6.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 7.6.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 7.6.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures.
Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 7.6.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{"auths": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 1 15 17 18 24 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 
4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 21 Specify the name of an existing VPC. 22 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 23 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 25 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 26 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a compute machine set 7.6.8. Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml file and completed any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 7.6.9. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. 
Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 7.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 7.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... 
If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 7.8.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 7.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). 
You have added one of the following authentication options to the GCP account that the ccoctl utility uses: The IAM Workload Identity Pool Admin role The following granular permissions: compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 7.8.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. If you plan to install the GCP Filestore Container Storage Interface (CSI) Driver Operator, retain this value. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 7.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 7.4. 
Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 7.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 7.12. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 
26", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl gcp create-all 
--name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_gcp/installing-gcp-vpc
Chapter 7. Developing collections
Chapter 7. Developing collections Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. Red Hat provides Ansible Content Collections on Ansible automation hub that contain both Red Hat Ansible Certified Content and Ansible validated content. If you have installed private automation hub, you can create collections for your organization and push them to private automation hub so that you can use them in job templates in Ansible Automation Platform. You can use collections to package and distribute plug-ins, which are written in Python. Collections can also package and distribute Ansible roles, which are expressed in YAML, together with any playbooks and custom plug-ins that those roles require. Typically, collections of roles are distributed for use within your organization.
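As a hedged sketch, the ansible-galaxy CLI can scaffold and build a collection before you push it to private automation hub. The namespace, collection name, and version shown here are invented for illustration, and publishing assumes a hub server and token are already configured in your ansible.cfg.

# Create the standard collection skeleton (galaxy.yml, plugins/, roles/, and so on).
ansible-galaxy collection init my_namespace.my_collection

# Build a distributable tarball from the collection directory.
cd my_namespace/my_collection
ansible-galaxy collection build

# Publish the tarball to the automation hub server configured for ansible-galaxy
# (the file name reflects the default 1.0.0 version in the generated galaxy.yml).
ansible-galaxy collection publish my_namespace-my_collection-1.0.0.tar.gz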
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/developing_automation_content/devtools-develop-collections_develop-automation-content
Chapter 3. Manually upgrading using the roxctl CLI
Chapter 3. Manually upgrading using the roxctl CLI You can upgrade to the latest version of Red Hat Advanced Cluster Security for Kubernetes (RHACS) from a supported older version. Important You need to perform the manual upgrade procedure only if you used the roxctl CLI to install RHACS. There are manual steps for each version upgrade that must be followed, for example, from version 3.74 to version 4.0, and from version 4.0 to version 4.1. Therefore, Red Hat recommends upgrading first from 3.74 to 4.0, then from 4.0 to 4.1, then 4.1 to 4.2, until the selected version is installed. For full functionality, Red Hat recommends upgrading to the most recent version. To upgrade RHACS to the latest version, perform the following steps: Backup the Central database Upgrade the roxctl CLI Upgrade the Central cluster Upgrade all secured clusters 3.1. Backing up the Central database You can back up the Central database and use that backup for rolling back from a failed upgrade or data restoration in the case of an infrastructure disaster. Prerequisites You must have an API token with read permission for all resources of Red Hat Advanced Cluster Security for Kubernetes. The Analyst system role has read permissions for all resources. You have installed the roxctl CLI. You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables. Procedure Run the backup command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" central backup Additional resources Authenticating by using the roxctl CLI 3.2. Upgrading the roxctl CLI To upgrade the roxctl CLI to the latest version you must uninstall the existing version of roxctl CLI and then install the latest version of the roxctl CLI. 3.2.1. Uninstalling the roxctl CLI You can uninstall the roxctl CLI binary on Linux by using the following procedure. Procedure Find and delete the roxctl binary: USD ROXPATH=USD(which roxctl) && rm -f USDROXPATH 1 1 Depending on your environment, you might need administrator rights to delete the roxctl binary. 3.2.2. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 3.2.3. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. 
Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 3.2.4. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 3.3. Upgrading the Central cluster After you have created a backup of the Central database and generated the necessary resources by using the provisioning bundle, the step is to upgrade the Central cluster. This process involves upgrading Central and Scanner. 3.3.1. Upgrading Central You can update Central to the latest version by downloading and deploying the updated images. Procedure Run the following command to update the Central image: USD oc -n stackrox set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 1 1 If you use Kubernetes, enter kubectl instead of oc . Verification Verify that the new pods have deployed: USD oc get deploy -n stackrox -o wide USD oc get pod -n stackrox --watch 3.3.1.1. Editing the GOMEMLIMIT environment variable for the Central deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Central deployment: USD oc -n stackrox edit deploy/central 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 3.3.2. Upgrading Scanner You can update Scanner to the latest version by downloading and deploying the updated images. Procedure Run the following command to update the Scanner image: USD oc -n stackrox set image deploy/scanner scanner=registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:4.6.3 1 1 If you use Kubernetes, enter kubectl instead of oc . Verification Verify that the new pods have deployed: USD oc get deploy -n stackrox -o wide USD oc get pod -n stackrox --watch 3.3.2.1. Editing the GOMEMLIMIT environment variable for the Scanner deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Scanner deployment: USD oc -n stackrox edit deploy/scanner 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 3.3.3. Verifying the Central cluster upgrade After you have upgraded both Central and Scanner, verify that the Central cluster upgrade is complete. 
Procedure Check the Central logs by running the following command: USD oc logs -n stackrox deploy/central -c central 1 1 If you use Kubernetes, enter kubectl instead of oc . Sample output of a successful upgrade No database restore directory found (this is not an error). Migrator: 2023/04/19 17:58:54: starting DB compaction Migrator: 2023/04/19 17:58:54: Free fraction of 0.0391 (40960/1048576) is < 0.7500. Will not compact badger 2023/04/19 17:58:54 INFO: All 1 tables opened in 2ms badger 2023/04/19 17:58:55 INFO: Replaying file id: 0 at offset: 846357 badger 2023/04/19 17:58:55 INFO: Replay took: 50.324ms badger 2023/04/19 17:58:55 DEBUG: Value log discard stats empty Migrator: 2023/04/19 17:58:55: DB is up to date. Nothing to do here. badger 2023/04/19 17:58:55 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]} version: 2023/04/19 17:58:55.189866 ensure.go:49: Info: Version found in the DB was current. We're good to go! 3.4. Upgrading all secured clusters After upgrading Central services, you must upgrade all secured clusters. Important If you are using automatic upgrades: Update all your secured clusters by using automatic upgrades. For information about troubleshooting problems with the automatic cluster upgrader, see Troubleshooting the cluster upgrader . Skip the instructions in this section and follow the instructions in the Verify upgrades and Revoking the API token sections. If you are not using automatic upgrades, you must run the instructions in this section on all secured clusters including the Central cluster. To ensure optimal functionality, use the same RHACS version for your secured clusters and the cluster on which Central is installed. To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow the instructions in this section. 3.4.1. Updating other images You must update the sensor, collector and compliance images on each secured cluster when not using automatic upgrades. Note If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure. Procedure Update the Sensor image: USD oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the Compliance image: USD oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the Collector image: USD oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.6.3 1 1 If you use Kubernetes, enter kubectl instead of oc . Note If you are using the collector slim image, run the following command instead: USD oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-slim-rhel8:{rhacs-version} Update the admission control image: USD oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 Important If you have installed RHACS on Red Hat OpenShift by using the roxctl CLI, you need to migrate the security context constraints (SCCs). For more information, see "Migrating SCCs during the manual upgrade" in the "Additional resources" section. 3.4.2. 
Adding POD_NAMESPACE to sensor and admission-control deployments When upgrading to version 4.6 or later from a version earlier than 4.6, you must patch the sensor and admission-control deployments to set the POD_NAMESPACE environment variable. Note If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure. Procedure Patch sensor to ensure POD_NAMESPACE is set by running the following command: USD [[ -z "USD(oc -n stackrox get deployment sensor -o yaml | grep POD_NAMESPACE)" ]] && oc -n stackrox patch deployment sensor --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}}]' Patch admission-control to ensure POD_NAMESPACE is set by running the following command: USD [[ -z "USD(oc -n stackrox get deployment admission-control -o yaml | grep POD_NAMESPACE)" ]] && oc -n stackrox patch deployment admission-control --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}}]' steps Verifying secured cluster upgrade Additional resources Migrating SCCs during the manual upgrade 3.4.3. Migrating SCCs during the manual upgrade By migrating the security context constraints (SCCs) during the manual upgrade by using roxctl CLI, you can seamlessly transition the Red Hat Advanced Cluster Security for Kubernetes (RHACS) services to use the Red Hat OpenShift SCCs, ensuring compatibility and optimal security configurations across Central and all secured clusters. Procedure List all of the RHACS services that are deployed on Central and all secured clusters: USD oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:' Example output Name: admission-control-6f4dcc6b4c-2phwd openshift.io/scc: stackrox-admission-control #... Name: central-575487bfcb-sjdx8 openshift.io/scc: stackrox-central Name: central-db-7c7885bb-6bgbd openshift.io/scc: stackrox-central-db Name: collector-56nkr openshift.io/scc: stackrox-collector #... Name: scanner-68fc55b599-f2wm6 openshift.io/scc: stackrox-scanner Name: scanner-68fc55b599-fztlh #... Name: sensor-84545f86b7-xgdwf openshift.io/scc: stackrox-sensor #... In this example, you can see that each pod has its own custom SCC, which is specified through the openshift.io/scc field. Add the required roles and role bindings to use the Red Hat OpenShift SCCs instead of the RHACS custom SCCs. To add the required roles and role bindings to use the Red Hat OpenShift SCCs for the Central cluster, complete the following steps: Create a file named update-central.yaml that defines the role and role binding resources by using the following content: Example 3.1. 
Example YAML file apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-central-db-scc 2 namespace: stackrox 3 Rules: 4 - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-central-scc namespace: stackrox rules: - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: scanner app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-scanner-scc namespace: stackrox rules: - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.k ubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: central-db-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-central-db-scc subjects: 8 - kind: ServiceAccount name: central-db namespace: stackrox - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: central-use-scc namespace: stackrox roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: use-central-scc subjects: - kind: ServiceAccount name: central namespace: stackrox - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: scanner app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: scanner-use-scc namespace: stackrox roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: use-scanner-scc subjects: - kind: ServiceAccount name: scanner namespace: stackrox - - - 1 The type of Kubernetes resource, in this example, Role . 2 The name of the role resource. 3 The namespace in which the role is created. 4 Describes the permissions granted by the role resource. 5 The type of Kubernetes resource, in this example, RoleBinding . 6 The name of the role binding resource. 
7 Specifies the role to bind in the same namespace. 8 Specifies the subjects that are bound to the role. Create the role and role binding resources specified in the update-central.yaml file by running the following command: USD oc -n stackrox create -f ./update-central.yaml To add the required roles and role bindings to use the Red Hat OpenShift SCCs for all secured clusters, complete the following steps: Create a file named upgrade-scs.yaml that defines the role and role binding resources by using the following content: Example 3.2. Example YAML file apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: use-privileged-scc 2 namespace: stackrox 3 rules: 4 - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: collector-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-privileged-scc subjects: 8 - kind: ServiceAccount name: collector namespace: stackrox - - - 1 The type of Kubernetes resource, in this example, Role . 2 The name of the role resource. 3 The namespace in which the role is created. 4 Describes the permissions granted by the role resource. 5 The type of Kubernetes resource, in this example, RoleBinding . 6 The name of the role binding resource. 7 Specifies the role to bind in the same namespace. 8 Specifies the subjects that are bound to the role. Create the role and role binding resources specified in the upgrade-scs.yaml file by running the following command: USD oc -n stackrox create -f ./update-scs.yaml Important You must run this command on each secured cluster to create the role and role bindings specified in the upgrade-scs.yaml file. Delete the SCCs that are specific to RHACS: To delete the SCCs that are specific to the Central cluster, run the following command: USD oc delete scc/stackrox-central scc/stackrox-central-db scc/stackrox-scanner To delete the SCCs that are specific to all secured clusters, run the following command: USD oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor Important You must run this command on each secured cluster to delete the SCCs that are specific to each secured cluster. Verification Ensure that all the pods are using the correct SCCs by running the following command: USD oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:' Compare the output with the following table: Component custom SCC New Red Hat OpenShift 4 SCC Central stackrox-central nonroot-v2 Central-db stackrox-central-db nonroot-v2 Scanner stackrox-scanner nonroot-v2 Scanner-db stackrox-scanner nonroot-v2 Admission Controller stackrox-admission-control restricted-v2 Collector stackrox-collector privileged Sensor stackrox-sensor restricted-v2 3.4.3.1. 
Editing the GOMEMLIMIT environment variable for the Sensor deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Sensor deployment: USD oc -n stackrox edit deploy/sensor 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 3.4.3.2. Editing the GOMEMLIMIT environment variable for the Collector deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Collector deployment: USD oc -n stackrox edit deploy/collector 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 3.4.3.3. Editing the GOMEMLIMIT environment variable for the Admission Controller deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Admission Controller deployment: USD oc -n stackrox edit deploy/admission-control 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 3.4.3.4. Verifying secured cluster upgrade After you have upgraded secured clusters, verify that the updated pods are working. Procedure Check that the new pods have deployed: USD oc get deploy,ds -n stackrox -o wide 1 1 If you use Kubernetes, enter kubectl instead of oc . USD oc get pod -n stackrox --watch 1 1 If you use Kubernetes, enter kubectl instead of oc . 3.5. Enabling RHCOS node scanning with the StackRox Scanner If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . This procedure describes how to enable node scanning for the first time. If you are reconfiguring Red Hat Advanced Cluster Security for Kubernetes to use the StackRox Scanner instead of Scanner V4, follow the procedure in "Restoring RHCOS node scanning with the StackRox Scanner". Procedure Run one of the following commands to update the compliance container. 
For a default compliance container with metrics disabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' For a compliance container with Prometheus metrics enabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' Update the Collector DaemonSet (DS) by taking the following steps: Add new volume mounts to Collector DS by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}' Add the new NodeScanner container by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.6.3","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}' Additional resources Scanning RHCOS node hosts 3.6. Rolling back Central You can roll back to a version of Central if the upgrade to a new version is unsuccessful. 3.6.1. Rolling back Central normally You can roll back to a version of Central if upgrading Red Hat Advanced Cluster Security for Kubernetes fails. 
Prerequisites Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you might not be able to roll back to an earlier version. Procedure Run the following command to roll back to a version when an upgrade fails (before the Central service starts): USD oc -n stackrox rollout undo deploy/central 1 1 If you use Kubernetes, enter kubectl instead of oc . 3.6.2. Rolling back Central forcefully You can use forced rollback to roll back to an earlier version of Central (after the Central service starts). Important Using forced rollback to switch back to a version might result in loss of data and functionality. Prerequisites Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you will not be able to roll back to an earlier version. Procedure Run the following commands to perform a forced rollback: To forcefully rollback to the previously installed version: USD oc -n stackrox rollout undo deploy/central 1 1 If you use Kubernetes, enter kubectl instead of oc . To forcefully rollback to a specific version: Edit Central's ConfigMap : USD oc -n stackrox edit configmap/central-config 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the value of the maintenance.forceRollbackVersion key: data: central-config.yaml: | maintenance: safeMode: false compaction: enabled: true bucketFillFraction: .5 freeFractionThreshold: 0.75 forceRollbackVersion: <x.x.x.x> 1 ... 1 Specify the version that you want to roll back to. Update the Central image version: USD oc -n stackrox \ 1 set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<x.x.x.x> 2 1 If you use Kubernetes, enter kubectl instead of oc . 2 Specify the version that you want to roll back to. It must be the same version that you specified for the maintenance.forceRollbackVersion key in the central-config config map. 3.7. Verifying upgrades The updated Sensors and Collectors continue to report the latest data from each secured cluster. The last time Sensor contacted Central is visible in the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration System Health . Check to ensure that Sensor Upgrade shows clusters up to date with Central. 3.8. Revoking the API token For security reasons, Red Hat recommends that you revoke the API token that you have used to complete Central database backup. Prerequisites After the upgrade, you must reload the RHACS portal page and re-accept the certificate to continue using the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Authentication Tokens category, and click API Token . Select the checkbox in front of the token name that you want to revoke. Click Revoke . On the confirmation dialog box, click Confirm . 3.9. Troubleshooting the cluster upgrader If you encounter problems when using the legacy installation method for the secured cluster and enabling the automated updates, you can try troubleshooting the problem. The following errors can be found in the clusters view when the upgrader fails. 3.9.1. 
Upgrader is missing permissions Symptom The following error is displayed in the cluster page: Upgrader failed to execute PreflightStage of the roll-forward workflow: executing stage "Run preflight checks": preflight check "Kubernetes authorization" reported errors. This usually means that access is denied. Have you configured this Secured Cluster for automatically receiving upgrades?" Procedure Ensure that the bundle for the secured cluster was generated with future upgrades enabled before clicking Download YAML file and keys . If possible, remove that secured cluster and generate a new bundle making sure that future upgrades are enabled. If you cannot re-create the cluster, you can take these actions: Ensure that the service account sensor-upgrader exists in the same namespace as Sensor. Ensure that a ClusterRoleBinding exists (default name: <namespace>:upgrade-sensors ) that grants the cluster-admin ClusterRole to the sensor-upgrader service account. 3.9.2. Upgrader cannot start due to missing image Symptom The following error is displayed in the cluster page: "Upgrade initialization error: The upgrader pods have trouble pulling the new image: Error pulling image: (...) (<image_reference:tag>: not found)" Procedure Ensure that the Secured Cluster can access the registry and pull the image <image_reference:tag> . Ensure that the image pull secrets are configured correctly in the secured cluster. 3.9.3. Upgrader cannot start due to an unknown reason Symptom The following error is displayed in the cluster page: "Upgrade initialization error: Pod terminated: (Error)" Procedure Ensure that the upgrader has enough permissions for accessing the cluster objects. For more information, see "Upgrader is missing permissions". Check the upgrader logs for more insights. 3.9.3.1. Obtaining upgrader logs The logs can be accessed by running the following command: USD kubectl -n <namespace> logs deploy/sensor-upgrader 1 1 For <namespace> , specify the namespace in which Sensor is running. Usually, the upgrader deployment is only running in the cluster for a short time while doing the upgrades. It is removed later, so accessing its logs using the orchestrator CLI can require proper timing.
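Because the sensor-upgrader deployment is short-lived, the following sketch shows one way to capture its logs as soon as it appears. This is not part of the official procedure; the namespace is an example and the loop simply polls until the deployment exists.

# Wait for the upgrader deployment to be created, then stream its logs.
# Adjust the namespace if Sensor runs somewhere other than "stackrox".
NAMESPACE=stackrox
until kubectl -n "$NAMESPACE" get deploy/sensor-upgrader >/dev/null 2>&1; do
  sleep 5   # poll every few seconds; the upgrader only exists briefly
done
kubectl -n "$NAMESPACE" logs -f deploy/sensor-upgrader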
[ "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central backup", "ROXPATH=USD(which roxctl) && rm -f USDROXPATH 1", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}\"", "chmod +x roxctl", "echo USDPATH", "roxctl version", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}\"", "xattr -c roxctl", "chmod +x roxctl", "echo USDPATH", "roxctl version", "curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe", "roxctl version", "oc -n stackrox set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 1", "oc get deploy -n stackrox -o wide", "oc get pod -n stackrox --watch", "oc -n stackrox edit deploy/central 1", "oc -n stackrox set image deploy/scanner scanner=registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:4.6.3 1", "oc get deploy -n stackrox -o wide", "oc get pod -n stackrox --watch", "oc -n stackrox edit deploy/scanner 1", "oc logs -n stackrox deploy/central -c central 1", "No database restore directory found (this is not an error). Migrator: 2023/04/19 17:58:54: starting DB compaction Migrator: 2023/04/19 17:58:54: Free fraction of 0.0391 (40960/1048576) is < 0.7500. Will not compact badger 2023/04/19 17:58:54 INFO: All 1 tables opened in 2ms badger 2023/04/19 17:58:55 INFO: Replaying file id: 0 at offset: 846357 badger 2023/04/19 17:58:55 INFO: Replay took: 50.324ms badger 2023/04/19 17:58:55 DEBUG: Value log discard stats empty Migrator: 2023/04/19 17:58:55: DB is up to date. Nothing to do here. badger 2023/04/19 17:58:55 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]} version: 2023/04/19 17:58:55.189866 ensure.go:49: Info: Version found in the DB was current. 
We're good to go!", "oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 1", "oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 1", "oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.6.3 1", "oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-slim-rhel8:{rhacs-version}", "oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3", "[[ -z \"USD(oc -n stackrox get deployment sensor -o yaml | grep POD_NAMESPACE)\" ]] && oc -n stackrox patch deployment sensor --type=json -p '[{\"op\":\"add\",\"path\":\"/spec/template/spec/containers/0/env/-\",\"value\":{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}}]'", "[[ -z \"USD(oc -n stackrox get deployment admission-control -o yaml | grep POD_NAMESPACE)\" ]] && oc -n stackrox patch deployment admission-control --type=json -p '[{\"op\":\"add\",\"path\":\"/spec/template/spec/containers/0/env/-\",\"value\":{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}}]'", "oc -n stackrox describe pods | grep 'openshift.io/scc\\|^Name:'", "Name: admission-control-6f4dcc6b4c-2phwd openshift.io/scc: stackrox-admission-control # Name: central-575487bfcb-sjdx8 openshift.io/scc: stackrox-central Name: central-db-7c7885bb-6bgbd openshift.io/scc: stackrox-central-db Name: collector-56nkr openshift.io/scc: stackrox-collector # Name: scanner-68fc55b599-f2wm6 openshift.io/scc: stackrox-scanner Name: scanner-68fc55b599-fztlh # Name: sensor-84545f86b7-xgdwf openshift.io/scc: stackrox-sensor #", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-central-db-scc 2 namespace: stackrox 3 Rules: 4 - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-central-scc namespace: stackrox rules: - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: scanner app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-scanner-scc namespace: stackrox rules: - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox 
labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.k ubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: central-db-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-central-db-scc subjects: 8 - kind: ServiceAccount name: central-db namespace: stackrox - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: central-use-scc namespace: stackrox roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: use-central-scc subjects: - kind: ServiceAccount name: central namespace: stackrox - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: scanner app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: scanner-use-scc namespace: stackrox roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: use-scanner-scc subjects: - kind: ServiceAccount name: scanner namespace: stackrox - - -", "oc -n stackrox create -f ./update-central.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: use-privileged-scc 2 namespace: stackrox 3 rules: 4 - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: collector-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-privileged-scc subjects: 8 - kind: ServiceAccount name: collector namespace: stackrox - - -", "oc -n stackrox create -f ./update-scs.yaml", "oc delete scc/stackrox-central scc/stackrox-central-db scc/stackrox-scanner", "oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor", "oc -n stackrox describe pods | grep 'openshift.io/scc\\|^Name:'", "oc -n stackrox edit deploy/sensor 1", "oc -n stackrox edit deploy/collector 1", "oc -n stackrox edit deploy/admission-control 1", "oc get deploy,ds -n stackrox -o wide 1", "oc get pod -n stackrox --watch 1", "oc -n stackrox patch daemonset/collector -p 
'{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\"disabled\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'", "oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\":9091\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'", "oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"tmp-volume\",\"emptyDir\":{}},{\"name\":\"cache-volume\",\"emptyDir\":{\"sizeLimit\":\"200Mi\"}}]}}}}'", "oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"command\":[\"/scanner\",\"--nodeinventory\",\"--config=\",\"\"],\"env\":[{\"name\":\"ROX_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"ROX_CLAIR_V4_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_COMPLIANCE_OPERATOR_INTEGRATION\",\"value\":\"true\"},{\"name\":\"ROX_CSV_EXPORT\",\"value\":\"false\"},{\"name\":\"ROX_DECLARATIVE_CONFIGURATION\",\"value\":\"false\"},{\"name\":\"ROX_INTEGRATIONS_AS_CONFIG\",\"value\":\"false\"},{\"name\":\"ROX_NETPOL_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_DETECTION_BASELINE_SIMULATION\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_GRAPH_PATTERNFLY\",\"value\":\"true\"},{\"name\":\"ROX_NODE_SCANNING_CACHE_TIME\",\"value\":\"3h36m\"},{\"name\":\"ROX_NODE_SCANNING_INITIAL_BACKOFF\",\"value\":\"30s\"},{\"name\":\"ROX_NODE_SCANNING_MAX_BACKOFF\",\"value\":\"5m\"},{\"name\":\"ROX_PROCESSES_LISTENING_ON_PORT\",\"value\":\"false\"},{\"name\":\"ROX_QUAY_ROBOT_ACCOUNTS\",\"value\":\"true\"},{\"name\":\"ROX_ROXCTL_NETPOL_GENERATE\",\"value\":\"true\"},{\"name\":\"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS\",\"value\":\"false\"},{\"name\":\"ROX_SYSLOG_EXTRA_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_SYSTEM_HEALTH_PF\",\"value\":\"false\"},{\"name\":\"ROX_VULN_MGMT_WORKLOAD_CVES\",\"value\":\"false\"}],\"image\":\"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.6.3\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"node-inventory\",\"ports\":[{\"containerPort\":8444,\"name\":\"grpc\",\"protocol\":\"TCP\"}],\"volumeMounts\":[{\"mountPath\":\"/host\",\"name\":\"host-root-ro\",\"readOnly\":true},{\"mountPath\":\"/tmp/\",\"name\":\"tmp-volume\"},{\"mountPath\":\"/cache\",\"name\":\"cache-volume\"}]}]}}}}'", "oc -n stackrox rollout undo deploy/central 1", "oc -n stackrox rollout undo deploy/central 1", "oc -n stackrox edit configmap/central-config 1", "data: central-config.yaml: | maintenance: safeMode: false compaction: enabled: true bucketFillFraction: .5 freeFractionThreshold: 0.75 forceRollbackVersion: <x.x.x.x> 1", "oc -n stackrox \\ 1 set image deploy/central 
central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<x.x.x.x> 2", "Upgrader failed to execute PreflightStage of the roll-forward workflow: executing stage \"Run preflight checks\": preflight check \"Kubernetes authorization\" reported errors. This usually means that access is denied. Have you configured this Secured Cluster for automatically receiving upgrades?\"", "\"Upgrade initialization error: The upgrader pods have trouble pulling the new image: Error pulling image: (...) (<image_reference:tag>: not found)\"", "\"Upgrade initialization error: Pod terminated: (Error)\"", "kubectl -n <namespace> logs deploy/sensor-upgrader 1" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/upgrading/upgrade-roxctl
Chapter 2. Differences from upstream OpenJDK 17
Chapter 2. Differences from upstream OpenJDK 17 Red Hat build of OpenJDK in Red Hat Enterprise Linux contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow Red Hat Enterprise Linux updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 17 changes: FIPS support. Red Hat build of OpenJDK 17 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 17 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 17 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all of the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See, Improve system FIPS detection (RHEL Planning Jira) See, Using system-wide cryptographic policies (RHEL documentation)
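As an illustration of how these RHEL-level settings can be inspected on a RHEL 8 or 9 host, the following commands are standard RHEL tools rather than part of Red Hat build of OpenJDK itself, and they do not apply to the Microsoft Windows builds.

# Show whether the host is running in FIPS mode; Red Hat build of OpenJDK
# detects this state and configures itself accordingly.
fips-mode-setup --check

# Show the active system-wide cryptographic policy (for example DEFAULT, LEGACY,
# FUTURE, or FIPS) that the JDK inherits for TLS, certificate path validation,
# and signed JARs.
update-crypto-policies --show

# Confirm the system-wide timezone data and CA trust store that the JDK consumes.
ls /usr/share/zoneinfo | head
ls /etc/pki/ca-trust/extracted/java/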
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/getting_started_with_red_hat_build_of_openjdk_17/rn-openjdk-diff-from-upstream
2.4. Configuring max_luns
2.4. Configuring max_luns If RAID storage in your cluster presents multiple LUNs (Logical Unit Numbers), each cluster node must be able to access those LUNs. To enable access to all LUNs presented, configure max_luns in the /etc/modprobe.conf file of each node as follows: Open /etc/modprobe.conf with a text editor. Append the following line to /etc/modprobe.conf . Set N to the highest numbered LUN that is presented by RAID storage. For example, with the following line appended to the /etc/modprobe.conf file, a node can access LUNs numbered as high as 255: Save /etc/modprobe.conf . Run mkinitrd to rebuild initrd for the currently running kernel as follows. Set the kernel variable to the currently running kernel: For example, the currently running kernel in the following mkinitrd command is 2.6.9-34.0.2.EL: Note You can determine the currently running kernel by running uname -r . Restart the node.
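For illustration only, the steps above can be scripted as follows on a node, run as root. The LUN limit of 255 is an example; substitute the highest LUN number presented by your RAID storage.

# Allow the SCSI midlayer to scan LUNs numbered up to 255.
echo "options scsi_mod max_luns=255" >> /etc/modprobe.conf

# Rebuild the initrd for the currently running kernel so the option takes
# effect at boot, then restart the node.
KERNEL=$(uname -r)
cd /boot
mkinitrd -f -v initrd-${KERNEL}.img ${KERNEL}
reboot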
[ "options scsi_mod max_luns= N", "options scsi_mod max_luns=255", "cd /boot mkinitrd -f -v initrd- kernel .img kernel", "mkinitrd -f -v initrd-2.6.9-34.0.2.EL.img 2.6.9-34.0.2.EL" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-max-luns-ca
Storage Strategies Guide
Storage Strategies Guide Red Hat Ceph Storage 5 Creating storage strategies for Red Hat Ceph Storage clusters Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/storage_strategies_guide/index
2.2. Support Limits and Removals
2.2. Support Limits and Removals Read this section to understand which technologies are no longer supported, or are supported with reduced functionality, in Red Hat Gluster Storage 3.5. gluster-swift (Object Store) Using Red Hat Gluster Storage as an object store for Red Hat OpenStack Platform is no longer supported. gstatus Red Hat Gluster Storage 3.5 does not support gstatus. Heketi for standalone Red Hat Gluster Storage Heketi is supported only for use with OpenShift Container Storage. It is not supported for general use with Red Hat Gluster Storage. Nagios Monitoring Red Hat Gluster Storage 3.5 does not support monitoring using Nagios. Red Hat Gluster Storage Web Administration is now the recommended monitoring tool for Red Hat Gluster Storage clusters. Red Hat Atomic Host Red Hat Atomic Host is not supported for use with Red Hat Gluster Storage 3.5. Red Hat Gluster Storage Console Red Hat Gluster Storage Console is not supported for use with Red Hat Gluster Storage 3.5. Red Hat Gluster Storage Web Administration is now the recommended monitoring tool for Red Hat Gluster Storage clusters. Virtual Disk Optimizer (VDO) Virtual Disk Optimizer (VDO) volumes are only supported as part of a Red Hat Hyperconverged Infrastructure for Virtualization 2.0 deployment. VDO is not supported with other Red Hat Gluster Storage deployments.
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/3.5_release_notes/support-limits-and-removals
Chapter 130. KafkaBridgeProducerSpec schema reference
Chapter 130. KafkaBridgeProducerSpec schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeProducerSpec schema properties Configures producer options for the Kafka Bridge as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for producers . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Properties with the following prefixes cannot be set: bootstrap.servers sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Bridge, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Example Kafka Bridge producer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... producer: config: acks: 1 delivery.timeout.ms: 300000 # ... Important The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge deployment might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. 130.1. KafkaBridgeProducerSpec schema properties Property Property type Description config map The Kafka producer configuration used for producer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).
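As a minimal sketch, a complete KafkaBridge resource carrying the producer configuration shown above could be applied as follows. The bridge name, bootstrap address, replica count, and HTTP port are assumptions for this example rather than values taken from this reference.

# Apply an example KafkaBridge with custom producer options; only the
# producer.config keys come from the example above, the rest is illustrative.
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092   # example cluster address
  http:
    port: 8080
  producer:
    config:
      acks: 1
      delivery.timeout.ms: 300000
EOF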
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # producer: config: acks: 1 delivery.timeout.ms: 300000 #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkabridgeproducerspec-reference
20.2.4. Using a Prepared FCP-attached SCSI Disk
20.2.4. Using a Prepared FCP-attached SCSI Disk Double-click Load . In the dialog box that follows, select SCSI as the Load type . As Load address fill in the device number of the FCP channel connected with the SCSI disk. As World wide port name fill in the WWPN of the storage system containing the disk as a 16-digit hexadecimal number. As Logical unit number fill in the LUN of the disk as a 16-digit hexadecimal number. As Boot program selector fill in the number corresponding to the zipl boot menu entry that you prepared for booting the Red Hat Enterprise Linux installer. Leave the Boot record logical block address as 0 and the Operating system specific load parameters empty. Click the OK button.
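For reference, the Boot program selector value corresponds to an entry number in the zipl boot menu written to the SCSI disk. A minimal zipl.conf sketch with hypothetical file names and kernel parameters; the exact target options for your FCP-attached disk may differ, so check the zipl.conf man page for your prepared setup:

[defaultboot]
defaultmenu = menu1

:menu1
1 = installer
default = 1
prompt = 1
timeout = 30
target = /boot

[installer]
image = /boot/kernel.img
ramdisk = /boot/initrd.img
parameters = "root=/dev/ram0 ro"
target = /boot

With a menu prepared like this, entering 1 as the Boot program selector boots the installer entry.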
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-s390-steps-boot-installing_in_an_lpar-scsi
Deploying OpenShift Data Foundation using Amazon Web Services
Deploying OpenShift Data Foundation using Amazon Web Services Red Hat OpenShift Data Foundation 4.15 Instructions for deploying OpenShift Data Foundation using Amazon Web Services for cloud storage Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Amazon Web Services.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_amazon_web_services/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/basic_authentication/making-open-source-more-inclusive
Chapter 10. GenericSecretSource schema reference
Chapter 10. GenericSecretSource schema reference Used in: KafkaClientAuthenticationOAuth , KafkaListenerAuthenticationCustom , KafkaListenerAuthenticationOAuth Property Description key The key under which the secret value is stored in the OpenShift Secret. string secretName The name of the OpenShift Secret containing the secret value. string
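A minimal sketch of how a GenericSecretSource is typically used, here as the clientSecret of an oauth authentication block; the Secret name and key are placeholders and must match an existing OpenShift Secret:

# assumes a Secret named my-oauth-secret containing a key named client-secret
authentication:
  type: oauth
  clientId: my-client
  clientSecret:
    secretName: my-oauth-secret
    key: client-secret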
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-GenericSecretSource-reference
7.7. Importing Templates
7.7. Importing Templates 7.7.1. Importing a Template into a Data Center Note The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center. See the Importing Existing Storage Domains section in the Red Hat Virtualization Administration Guide for information on importing storage domains. Import templates from a newly attached export domain. This procedure requires access to the Administration Portal. Importing a Template into a Data Center Click Storage Domains and select the newly attached export domain. Click the domain name to go to the details view. Click the Template Import tab and select a template. Click Import . Use the drop-down lists to select the Target Cluster and CPU Profile . Select the template to view its details, then click the Disks tab and select the Storage Domain to import the template into. Click OK . If the Import Template Conflict window appears, enter a New Name for the template, or select the Apply to all check box and enter a Suffix to add to the cloned Templates . Click OK . Click Close . The template is imported into the destination data center. This can take up to an hour, depending on your storage hardware. You can view the import progress in the Events tab. Once the importing process is complete, the templates will be visible in Compute Templates . The templates can create new virtual machines, or run existing imported virtual machines based on that template. 7.7.2. Importing a Virtual Disk from an OpenStack Image Service as a Template Virtual disks managed by an OpenStack Image Service can be imported into the Red Hat Virtualization Manager if that OpenStack Image Service has been added to the Manager as an external provider. This procedure requires access to the Administration Portal. Click Storage Domains and select the OpenStack Image Service domain. Click the storage domain name to go to the details view. Click the Images tab and select the image to import. Click Import . Note If you are importing an image from a Glance storage domain, you have the option of specifying the template name. OpenStack Glance is now deprecated. This functionality will be removed in a later release. Select the Data Center into which the virtual disk will be imported. Select the storage domain in which the virtual disk will be stored from the Domain Name drop-down list. Optionally, select a Quota to apply to the virtual disk. Select the Import as Template check box. Select the Cluster in which the virtual disk will be made available as a template. Click OK . The image is imported as a template and is displayed in the Templates tab. You can now create virtual machines based on the template.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-importing_templates
Integrating Oracle Cloud data into cost management
Integrating Oracle Cloud data into cost management Cost Management Service 1-latest Learn how to add and configure your Oracle Cloud integration Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_oracle_cloud_data_into_cost_management/index
Chapter 2. Maintenance support
Chapter 2. Maintenance support 2.1. Maintenance support for JBoss EAP XP When a new JBoss EAP XP major version is released, maintenance support for the major version begins. Maintenance support usually lasts for 12 weeks. If you use a JBoss EAP XP major version that is outside of its maintenance support length, you might experience issues as the development of security patches and bug fixes no longer apply. To avoid such issues, upgrade to the newest JBoss EAP XP major version release that is compatible with your JBoss EAP version. Additional resources For information about maintenance support, see the Red Hat JBoss Enterprise Application Platform expansion pack (JBoss EAP XP or EAP XP) Life Cycle and Support Policies located on the Red Hat Customer Portal.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/red_hat_jboss_eap_xp_5.0_release_notes/maintenance_support
21.12. virt-customize: Customizing Virtual Machine Settings
21.12. virt-customize: Customizing Virtual Machine Settings The virt-customize command-line tool can be used to customize a virtual machine. For example, by installing packages and editing configuration files. To use virt-customize , the guest virtual machine must be offline, so you must shut it down before running the commands. Note that virt-customize modifies the guest or disk image in place without making a copy of it. If you want to preserve the existing contents of the guest virtual machine, you must snapshot, copy or clone the disk first. For more information on copying and cloning disks, see libguestfs.org . Warning Using virt-customize on live virtual machines, or concurrently with other disk editing tools can cause disk corruption. The virtual machine must be shut down before using this command. In addition, disk images should not be edited concurrently. It is recommended that you do not run virt-customize as root. To install virt-customize , run one of the following commands: # yum install /usr/bin/virt-customize or # yum install libguestfs-tools-c The following command options are available to use with virt-customize : Table 21.2. virt-customize options Command Description Example --help Displays a brief help entry about a particular command or about the virt-customize utility. For additional help, see the virt-customize man page. virt-customize --help -a [ file ] or --add [ file ] Adds the specified file , which should be a disk image from a guest virtual machine. The format of the disk image is auto-detected. To override this and force a particular format, use the --format option. virt-customize --add /dev/vms/disk.img -a [ URI ] or --add [ URI ] Adds a remote disk. The URI format is compatible with guestfish. For more information, see Section 21.4.2, "Adding Files with guestfish" . virt-customize -a rbd://example.com[:port]/pool/disk -c [ URI ] or --connect [ URI ] Connects to the given URI, if using libvirt . If omitted, then it connects via the KVM hypervisor. If you specify guest block devices directly ( virt-customize -a ), then libvirt is not used at all. virt-customize -c qemu:///system -d [ guest ] or --domain [ guest ] Adds all the disks from the specified guest virtual machine. Domain UUIDs can be used instead of domain names. virt-customize --domain 90df2f3f-8857-5ba9-2714-7d95907b1c9e -n or --dry-run Performs a read-only "dry run" customize operation on the guest virtual machine. This runs the customize operation, but throws away any changes to the disk at the end. virt-customize -n --format [ raw | qcow2 | auto ] The default for the -a option is to auto-detect the format of the disk image. Using this forces the disk format for -a options that follow on the command line. Using --format auto switches back to auto-detection for subsequent -a options (see the -a command above). virt-customize --format raw -a disk.img forces raw format (no auto-detection) for disk.img, but virt-customize --format raw -a disk.img --format auto -a another.img forces raw format (no auto-detection) for disk.img and reverts to auto-detection for another.img . If you have untrusted raw-format guest disk images, you should use this option to specify the disk format. This avoids a possible security problem with malicious guests. -m [ MB ] or --memsize [ MB ] Changes the amount of memory allocated to --run scripts. If --run scripts or the --install option cause out of memory issues, increase the memory allocation. 
virt-customize --memsize 1024 --network or --no-network Enables or disables network access from the guest during installation. The default is enabled. Use --no-network to disable access. This command does not affect guest access to the network after booting. For more information, see libguestfs documentation . virt-customize -a http://[user@]example.com[:port]/disk.img -q or --quiet Prevents the printing of log messages. virt-customize -q -smp [ N ] Enables N virtual CPUs that can be used by --install scripts. N must be 2 or more. virt-customize -smp 4 -v or --verbose Enables verbose messages for debugging purposes. virt-customize --verbose -V or --version Displays the virt-customize version number and exits. virt-customize -V -x Enables tracing of libguestfs API calls. virt-customize -x The virt-customize command uses customization options to configure how the guest is customized. The following provides information about the --selinux-relabel customization option. The --selinux-relabel customization option relabels files in the guest so that they have the correct SELinux label. This option tries to relabel files immediately. If unsuccessful, /.autorelabel is activated on the image. This schedules the relabel operation for the next time the image boots. Note This option should only be used for guests that support SELinux. The following example installs the GIMP and Inkscape packages on the guest and ensures that the SELinux labels will be correct the next time the guest boots. Example 21.1. Using virt-customize to install packages on a guest virt-customize -a disk.img --install gimp,inkscape --selinux-relabel For more information, including customization options, see libguestfs.org .
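Several customization options can be combined in a single invocation. A minimal sketch, assuming a hypothetical disk image path, package, and hostname:

virt-customize -a /var/lib/libvirt/images/guest.qcow2 \
  --install httpd \
  --run-command 'systemctl enable httpd' \
  --hostname web01.example.com \
  --selinux-relabel

As in Example 21.1, include --selinux-relabel only for guests that support SELinux.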
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_disk_access_with_offline_tools-using_virt_customize
Chapter 5. Configuration options
Chapter 5. Configuration options This chapter lists the available configuration options for AMQ OpenWire JMS. JMS configuration options are set as query parameters on the connection URI. For more information, see Section 4.3, "Connection URIs" . 5.1. JMS options jms.username The user name the client uses to authenticate the connection. jms.password The password the client uses to authenticate the connection. jms.clientID The client ID that the client applies to the connection. jms.closeTimeout The timeout in milliseconds for JMS close operations. The default is 15000 (15 seconds). jms.connectResponseTimeout The timeout in milliseconds for JMS connect operations. The default is 0, meaning no timeout. jms.sendTimeout The timeout in milliseconds for JMS send operations. The default is 0, meaning no timeout. jms.checkForDuplicates If enabled, ignore duplicate messages. It is enabled by default. jms.disableTimeStampsByDefault If enabled, do not timestamp messages. It is disabled by default. jms.useAsyncSend If enabled, send messages without waiting for acknowledgment. It is disabled by default. jms.alwaysSyncSend If enabled, send waits for acknowledgment in all delivery modes. It is disabled by default. jms.useCompression If enabled, compress message bodies. It is disabled by default. jms.useRetroactiveConsumer If enabled, non-durable subscribers can receive messages that were published before the subscription started. It is disabled by default. Prefetch policy options Prefetch policy determines how many messages each MessageConsumer fetches from the remote peer and holds in a local "prefetch" buffer. jms.prefetchPolicy.queuePrefetch The number of messages to prefetch for queues. The default is 1000. jms.prefetchPolicy.queueBrowserPrefetch The number of messages to prefetch for queue browsers. The default is 500. jms.prefetchPolicy.topicPrefetch The number of messages to prefetch for non-durable topics. The default is 32766. jms.prefetchPolicy.durableTopicPrefetch The number of messages to prefetch for durable topics. The default is 100. jms.prefetchPolicy.all This can be used to set all prefetch values at once. The value of prefetch can affect the distribution of messages to multiple consumers on a queue. A higher value can result in larger batches sent at once to each consumer. To achieve more even round-robin distribution when consumers operate at different rates, use a lower value. Redelivery policy options Redelivery policy controls how redelivered messages are handled on the client. jms.redeliveryPolicy.maximumRedeliveries The number of times redelivery is attempted before the message is sent to the dead letter queue. The default is 6. -1 means no limit. jms.redeliveryPolicy.redeliveryDelay The time in milliseconds between redelivery attempts. This is used if initialRedeliveryDelay is 0. The default is 1000 (1 second). jms.redeliveryPolicy.initialRedeliveryDelay The time in milliseconds before the first redelivery attempt. The default is 1000 (1 second). jms.redeliveryPolicy.maximumRedeliveryDelay The maximum time in milliseconds between redelivery attempts. This is used if useExponentialBackOff is enabled. The default is 1000 (1 second). -1 means no limit. jms.redeliveryPolicy.useExponentialBackOff If enabled, increase redelivery delay with each subsequent attempt. It is disabled by default. jms.redeliveryPolicy.backOffMultiplier The multiplier for increasing the redelivery delay. The default is 5. 
jms.redeliveryPolicy.useCollisionAvoidance If enabled, adjust the redelivery delay slightly up or down to avoid collisions. It is disabled by default. jms.redeliveryPolicy.collisionAvoidanceFactor The multiplier for adjusting the redelivery delay. The default is 0.15. nonBlockingRedelivery If enabled, allow out of order redelivery, to avoid head-of-line blocking. It is disabled by default. 5.2. TCP options closeAsync If enabled, close the socket in a separate thread. It is enabled by default. connectionTimeout The timeout in milliseconds for TCP connect operations. The default is 30000 (30 seconds). 0 means no timeout. dynamicManagement If enabled, allow JMX management of the transport logger. It is disabled by default. ioBufferSize The I/O buffer size in bytes. The default is 8192 (8 KiB). jmxPort The port for JMX management. The default is 1099. keepAlive If enabled, use TCP keepalive. This is distinct from the keepalive mechanism based on KeepAliveInfo messages. It is disabled by default. logWriterName The name of the org.apache.activemq.transport.LogWriter implementation. Name-to-class mappings are stored in the resources/META-INF/services/org/apache/activemq/transport/logwriters directory. The default is default . soLinger The socket linger option. The default is 0. soTimeout The timeout in milliseconds for socket read operations. The default is 0, meaning no timeout. soWriteTimeout The timeout in milliseconds for socket write operations. The default is 0, meaning no timeout. startLogging If enabled, and the trace option is also enabled, log transport startup events. It is enabled by default. tcpNoDelay If enabled, do not delay and buffer TCP sends. It is disabled by default. threadName If set, the name assigned to the transport thread. The remote address is appended to the name. It is unset by default. trace If enabled, log transport events to log4j.logger.org.apache.activemq.transport.TransportLogger . It is disabled by default. useInactivityMonitor If enabled, time out connections that fail to send KeepAliveInfo messages. It is enabled by default. useKeepAlive If enabled, periodically send KeepAliveInfo messages to prevent the connection from timing out. It is enabled by default. useLocalHost If enabled, make local connections using the name localhost instead of the current hostname. It is disabled by default. 5.3. SSL/TLS options socket.keyStore The path to the SSL/TLS key store. A key store is required for mutual SSL/TLS authentication. If unset, the value of the javax.net.ssl.keyStore system property is used. socket.keyStorePassword The password for the SSL/TLS key store. If unset, the value of the javax.net.ssl.keyStorePassword system property is used. socket.keyStoreType The string name of the key store type. The default is the value of java.security.KeyStore.getDefaultType() . socket.trustStore The path to the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStore system property is used. socket.trustStorePassword The password for the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStorePassword system property is used. socket.trustStoreType The string name of the trust store type. The default is the value of java.security.KeyStore.getDefaultType() . socket.enabledCipherSuites A comma-separated list of cipher suites to enable. If unset, the JVM default ciphers are used. socket.enabledProtocols A comma-separated list of SSL/TLS protocols to enable. If unset, the JVM default protocols are used. 5.4.
OpenWire options wireFormat.cacheEnabled If enabled, avoid excessive marshalling and bandwidth consumption by caching frequently used values. It is enabled by default. wireFormat.cacheSize The number of cache entries. The cache is per connection. The default is 1024. wireFormat.maxInactivityDuration The maximum time in milliseconds before a connection with no activity is considered dead. The default is 30000 (30 seconds). wireFormat.maxInactivityDurationInitalDelay The initial delay in milliseconds before inactivity checking begins. Note that Inital is misspelled. The default is 10000 (10 seconds). wireFormat.maxFrameSize The maximum frame size in bytes. The default is the value of java.lang.Long.MAX_VALUE . wireFormat.sizePrefixDisabled If set true, do not prefix packets with their size. It is false by default. wireFormat.stackTraceEnabled If enabled, send stack traces from exceptions on the server to the client. It is enabled by default. wireFormat.tcpNoDelayEnabled If enabled, tell the server to activate TCP_NODELAY . It is enabled by default. wireFormat.tightEncodingEnabled If enabled, optimize for smaller encoding on the wire. This increases CPU usage. It is enabled by default. 5.5. Failover options maxReconnectAttempts The number of reconnect attempts allowed before reporting the connection as failed. The default is -1, meaning no limit. 0 disables reconnect. maxReconnectDelay The maximum time in milliseconds between the second and subsequent reconnect attempts. The default is 30000 (30 seconds). randomize If enabled, randomly select one of the failover endpoints. It is enabled by default. reconnectDelayExponent The multiplier for increasing the reconnect delay backoff. The default is 2.0. useExponentialBackOff If enabled, increase the reconnect delay with each subsequent attempt. It is enabled by default. timeout The timeout in milliseconds for send operations waiting for reconnect. The default is -1, meaning no timeout.
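Because all of these options are query parameters, they are combined directly on the connection URI. A few illustrative URIs, assuming hypothetical broker hosts and file paths:

# JMS, prefetch, and redelivery options on a plain TCP connection
tcp://broker.example.com:61616?jms.username=app1&jms.prefetchPolicy.queuePrefetch=100&jms.redeliveryPolicy.maximumRedeliveries=3

# SSL/TLS transport with socket.* options
ssl://broker.example.com:61617?socket.trustStore=/etc/pki/tls/truststore.jks&socket.trustStorePassword=changeit&socket.enabledProtocols=TLSv1.2

# Failover across two brokers; wireFormat options belong on the nested transport URIs
failover:(tcp://broker1.example.com:61616?wireFormat.maxInactivityDuration=60000,tcp://broker2.example.com:61616)?maxReconnectAttempts=10&randomize=false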
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_openwire_jms_client/configuration_options
Chapter 2. Configuring an IBM Cloud account
Chapter 2. Configuring an IBM Cloud account Before you can install OpenShift Container Platform, you must configure an IBM Cloud account. 2.1. Prerequisites You have an IBM Cloud account with a subscription. You cannot install OpenShift Container Platform on a free or trial IBM Cloud account. 2.2. Quotas and limits on IBM Cloud VPC The OpenShift Container Platform cluster uses a number of IBM Cloud VPC components, and the default quotas and limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain regions, or run multiple clusters from your account, you might need to request additional resources for your IBM Cloud account. For a comprehensive list of the default IBM Cloud VPC quotas and service limits, see IBM Cloud's documentation for Quotas and service limits . Virtual Private Cloud (VPC) Each OpenShift Container Platform cluster creates its own VPC. The default quota of VPCs per region is 10 and will allow 10 clusters. To have more than 10 clusters in a single region, you must increase this quota. Application load balancer By default, each cluster creates three application load balancers (ALBs): Internal load balancer for the master API server External load balancer for the master API server Load balancer for the router You can create additional LoadBalancer service objects to create additional ALBs. The default quota of VPC ALBs is 50 per region. To have more than 50 ALBs, you must increase this quota. VPC ALBs are supported. Classic ALBs are not supported for IBM Cloud VPC. Floating IP address By default, the installation program distributes control plane and compute machines across all availability zones within a region to provision the cluster in a highly available configuration. In each availability zone, a public gateway is created and requires a separate floating IP address. The default quota for a floating IP address is 20 addresses per availability zone. The default cluster configuration yields three floating IP addresses: Two floating IP addresses in the us-east-1 primary zone. The IP address associated with the bootstrap node is removed after installation. One floating IP address in the us-east-2 secondary zone. One floating IP address in the us-east-3 secondary zone. IBM Cloud VPC can support up to 19 clusters per region in an account. If you plan to have more than 19 default clusters, you must increase this quota. Virtual Server Instances (VSI) By default, a cluster creates VSIs using bx2-4x16 profiles, which include the following resources by default: 4 vCPUs 16 GB RAM The following nodes are created: One bx2-4x16 bootstrap machine, which is removed after the installation is complete Three bx2-4x16 control plane nodes Three bx2-4x16 compute nodes For more information, see IBM Cloud's documentation on supported profiles . Table 2.1. VSI component quotas and limits VSI component Default IBM Cloud VPC quota Default cluster configuration Maximum number of clusters vCPU 200 vCPUs per region 28 vCPUs, or 24 vCPUs after bootstrap removal 8 per region RAM 1600 GB per region 112 GB, or 96 GB after bootstrap removal 16 per region Storage 18 TB per region 1050 GB, or 900 GB after bootstrap removal 19 per region If you plan to exceed the resources stated in the table, you must increase your IBM Cloud account quota. Block Storage Volumes For each VPC machine, a block storage device is attached for its boot volume.
The default cluster configuration creates seven VPC machines, resulting in seven block storage volumes. Additional Kubernetes persistent volume claims (PVCs) of the IBM Cloud VPC storage class create additional block storage volumes. The default quota of VPC block storage volumes is 300 per region. To have more than 300 volumes, you must increase this quota. 2.3. Configuring DNS resolution How you configure DNS resolution depends on the type of OpenShift Container Platform cluster you are installing: If you are installing a public cluster, you use IBM Cloud Internet Services (CIS). If you are installing a private cluster, you use IBM Cloud DNS Services (DNS Services). 2.3.1. Using IBM Cloud Internet Services for DNS resolution The installation program uses IBM Cloud Internet Services (CIS) to configure cluster DNS resolution and provide name lookup for a public cluster. Note IBM Cloud VPC does not support IPv6, so dual stack or IPv6 environments are not possible. You must create a domain zone in CIS in the same account as your cluster. You must also ensure the zone is authoritative for the domain. You can do this using a root domain or subdomain. Prerequisites You have installed the IBM Cloud CLI . You have an existing domain and registrar. For more information, see the IBM documentation . Procedure Create a CIS instance to use with your cluster: Install the CIS plugin: USD ibmcloud plugin install cis Create the CIS instance: USD ibmcloud cis instance-create <instance_name> standard 1 1 At a minimum, a Standard plan is required for CIS to manage the cluster subdomain and its DNS records. Connect an existing domain to your CIS instance: Set the context instance for CIS: USD ibmcloud cis instance-set <instance_name> 1 1 The instance cloud resource name. Add the domain for CIS: USD ibmcloud cis domain-add <domain_name> 1 1 The fully qualified domain name. You can use either the root domain or subdomain value as the domain name, depending on which you plan to configure. Note A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Open the CIS web console , navigate to the Overview page, and note your CIS name servers. These name servers will be used in the next step. Configure the name servers for your domains or subdomains at the domain's registrar or DNS provider. For more information, see IBM Cloud's documentation . 2.3.2. Using IBM Cloud DNS Services for DNS resolution The installation program uses IBM Cloud DNS Services to configure cluster DNS resolution and provide name lookup for a private cluster. You configure DNS resolution by creating a DNS services instance for the cluster, and then adding a DNS zone to the DNS Services instance. Ensure that the zone is authoritative for the domain. You can do this using a root domain or subdomain. Note IBM Cloud VPC does not support IPv6, so dual stack or IPv6 environments are not possible. Prerequisites You have installed the IBM Cloud CLI . You have an existing domain and registrar. For more information, see the IBM documentation . Procedure Create a DNS Services instance to use with your cluster: Install the DNS Services plugin by running the following command: USD ibmcloud plugin install cloud-dns-services Create the DNS Services instance by running the following command: USD ibmcloud dns instance-create <instance-name> standard-dns 1 1 At a minimum, a Standard plan is required for DNS Services to manage the cluster subdomain and its DNS records.
Create a DNS zone for the DNS Services instance: Set the target operating DNS Services instance by running the following command: USD ibmcloud dns instance-target <instance-name> Add the DNS zone to the DNS Services instance by running the following command: USD ibmcloud dns zone-create <zone-name> 1 1 The fully qualified zone name. You can use either the root domain or subdomain value as the zone name, depending on which you plan to configure. A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Record the name of the DNS zone you have created. As part of the installation process, you must update the install-config.yaml file before deploying the cluster. Use the name of the DNS zone as the value for the baseDomain parameter. Note You do not have to manage permitted networks or configure an "A" DNS resource record. As required, the installation program configures these resources automatically. 2.4. IBM Cloud VPC IAM Policies and API Key To install OpenShift Container Platform into your IBM Cloud account, the installation program requires an IAM API key, which provides authentication and authorization to access IBM Cloud service APIs. You can use an existing IAM API key that contains the required policies or create a new one. For an IBM Cloud IAM overview, see the IBM Cloud documentation . 2.4.1. Required access policies You must assign the required access policies to your IBM Cloud account. Table 2.2. Required access policies Service type Service Access policy scope Platform access Service access Account management IAM Identity Service All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Service ID creator Account management [2] Identity and Access Management All resources Editor, Operator, Viewer, Administrator Account management Resource group only All resource groups in the account Administrator IAM services Cloud Object Storage All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager, Content Reader, Object Reader, Object Writer IAM services Internet Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager IAM services DNS Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager IAM services VPC Infrastructure Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager The policy access scope should be set based on how granular you want to assign access. The scope can be set to All resources or Resources based on selected attributes . Optional: This access policy is only required if you want the installation program to create a resource group. For more information about resource groups, see the IBM documentation . 2.4.2. Access policy assignment In IBM Cloud VPC IAM, access policies can be attached to different subjects: Access group (Recommended) Service ID User The recommended method is to define IAM access policies in an access group . This helps organize all the access required for OpenShift Container Platform and enables you to onboard users and service IDs to this group. You can also assign access to users and service IDs directly, if desired. 2.4.3. Creating an API key You must create a user API key or a service ID API key for your IBM Cloud account. Prerequisites You have assigned the required access policies to your IBM Cloud account. 
You have attached your IAM access policies to an access group, or other appropriate resource. Procedure Create an API key, depending on how you defined your IAM access policies. For example, if you assigned your access policies to a user, you must create a user API key . If you assigned your access policies to a service ID, you must create a service ID API key . If your access policies are assigned to an access group, you can use either API key type (example commands are sketched below). For more information on IBM Cloud VPC API keys, see Understanding API keys . 2.5. Supported IBM Cloud VPC regions You can deploy an OpenShift Container Platform cluster to the following regions: au-syd (Sydney, Australia) br-sao (Sao Paulo, Brazil) ca-tor (Toronto, Canada) eu-de (Frankfurt, Germany) eu-gb (London, United Kingdom) jp-osa (Osaka, Japan) jp-tok (Tokyo, Japan) us-east (Washington DC, United States) us-south (Dallas, United States) 2.6. Next steps Configuring IAM for IBM Cloud VPC
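A minimal sketch of the recommended access group and API key setup using the IBM Cloud CLI; the group name, key name, user, and file name are placeholders, the roles and service names you grant should follow the table of required access policies, and the subcommand syntax should be verified with ibmcloud iam help:

ibmcloud iam access-group-create ocp-installers
ibmcloud iam access-group-policy-create ocp-installers --roles Administrator,Manager --service-name is
ibmcloud iam access-group-user-add ocp-installers user@example.com
ibmcloud iam api-key-create ocp-install-key -d "OpenShift installer key" --file ocp-install-key.json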
[ "ibmcloud plugin install cis", "ibmcloud cis instance-create <instance_name> standard 1", "ibmcloud cis instance-set <instance_name> 1", "ibmcloud cis domain-add <domain_name> 1", "ibmcloud plugin install cloud-dns-services", "ibmcloud dns instance-create <instance-name> standard-dns 1", "ibmcloud dns instance-target <instance-name>", "ibmcloud dns zone-create <zone-name> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_ibm_cloud_vpc/installing-ibm-cloud-account
4.5. One-Time Passwords
4.5. One-Time Passwords One-time password (OTP) is a password that is valid for only one authentication session; it becomes invalid after use. Unlike traditional static passwords that stay the same for a longer period of time, OTPs keep changing. OTPs are used as part of two-factor authentication: the first step requires the user to authenticate with a traditional static password, and the second step prompts for an OTP issued by a recognized authentication token. Authentication using an OTP combined with a static password is considered safer than authentication using a static password alone. Because an OTP can only be used for successful authentication once, even if a potential intruder intercepts the OTP during login, the intercepted OTP will already be invalid by that point. One-Time Passwords in Red Hat Enterprise Linux Red Hat Identity Management supports OTP authentication for IdM users. For more information, see the One-Time Passwords section in the Linux Domain Identity, Authentication, and Policy Guide .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/otp
Chapter 4. Installing a cluster
Chapter 4. Installing a cluster 4.1. Cleaning up installations In case of an earlier failed deployment, remove the artifacts from the failed attempt before trying to deploy OpenShift Container Platform again. Procedure Power off all bare-metal nodes before installing the OpenShift Container Platform cluster by using the following command: USD ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off Remove all old bootstrap resources if any remain from an earlier deployment attempt by using the following script: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done Delete the artifacts that the earlier installation generated by using the following command: USD cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json \ .openshift_install.log .openshift_install_state.json Re-create the OpenShift Container Platform manifests by using the following command: USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests 4.2. Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: USD ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 4.3. Following the progress of the installation During the deployment process, you can check the installation's overall status by issuing the tail command to the .openshift_install.log log file in the install directory folder: USD tail -f /path/to/install-dir/.openshift_install.log 4.4. Verifying static IP address configuration If the DHCP reservation for a cluster node specifies an infinite lease, after the installer successfully provisions the node, the dispatcher script checks the node's network configuration. If the script determines that the network configuration contains an infinite DHCP lease, it creates a new connection using the IP address of the DHCP lease as a static IP address. Note The dispatcher script might run on successfully provisioned nodes while the provisioning of other nodes in the cluster is ongoing. Verify the network configuration is working properly. Procedure Check the network interface configuration on the node. Turn off the DHCP server and reboot the OpenShift Container Platform node and ensure that the network configuration works properly. 4.5. Additional resources Understanding update channels and releases
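If you prefer to block until the deployment finishes, or to re-check a completed run, the installer also provides a wait-for subcommand. This is a sketch; confirm that the subcommand is available in your installer version:

./openshift-baremetal-install --dir ~/clusterconfigs wait-for install-complete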
[ "ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off", "for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done", "cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json .openshift_install.log .openshift_install_state.json", "./openshift-baremetal-install --dir ~/clusterconfigs create manifests", "./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster", "tail -f /path/to/install-dir/.openshift_install.log" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installing-a-cluster