Chapter 20. Granting sudo access to an IdM user on an IdM client
Chapter 20. Granting sudo access to an IdM user on an IdM client Learn more about granting sudo access to users in Identity Management. 20.1. Sudo access on an IdM client System administrators can grant sudo access to allow non-root users to execute administrative commands that are normally reserved for the root user. Consequently, when users need to perform an administrative command normally reserved for the root user, they precede that command with sudo . After entering their password, the command is executed as if they were the root user. To execute a sudo command as another user or group, such as a database service account, you can configure a RunAs alias for a sudo rule. If a Red Hat Enterprise Linux (RHEL) 8 host is enrolled as an Identity Management (IdM) client, you can specify sudo rules defining which IdM users can perform which commands on the host in the following ways: Locally in the /etc/sudoers file Centrally in IdM You can create a central sudo rule for an IdM client using the command line (CLI) and the IdM Web UI. You can also configure password-less authentication for sudo using the Generic Security Service Application Programming Interface (GSSAPI), the native way for UNIX-based operating systems to access and authenticate Kerberos services. You can use the pam_sss_gss.so Pluggable Authentication Module (PAM) to invoke GSSAPI authentication via the SSSD service, allowing users to authenticate to the sudo command with a valid Kerberos ticket. Additional resources Managing sudo access 20.2. Granting sudo access to an IdM user on an IdM client using the CLI In Identity Management (IdM), you can grant sudo access for a specific command to an IdM user account on a specific IdM host. First, add a sudo command and then create a sudo rule for one or more commands. For example, complete this procedure to create the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. Procedure Retrieve a Kerberos ticket as the IdM admin . Add the /usr/sbin/reboot command to the IdM database of sudo commands: Create a sudo rule named idm_user_reboot : Add the /usr/sbin/reboot command to the idm_user_reboot rule: Apply the idm_user_reboot rule to the IdM idmclient host: Add the idm_user account to the idm_user_reboot rule: Optional: Define the validity of the idm_user_reboot rule: To define the time at which a sudo rule starts to be valid, use the ipa sudorule-mod sudo_rule_name command with the --setattr sudonotbefore= DATE option. The DATE value must follow the yyyymmddHHMMSSZ format, with seconds specified explicitly. For example, to set the start of the validity of the idm_user_reboot rule to 31 December 2025 12:34:00, enter: To define the time at which a sudo rule stops being valid, use the --setattr sudonotafter=DATE option. For example, to set the end of the idm_user_reboot rule validity to 31 December 2026 12:34:00, enter: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. 
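Taken together, the steps above correspond to a command sequence along the following lines. This is a sketch that reuses the example names idm_user, idm_user_reboot, and idmclient.idm.example.com from this section, together with the optional sudonotbefore and sudonotafter values from the validity example:

# Obtain IdM administrator credentials
$ kinit admin

# Register the command in the IdM database of sudo commands
$ ipa sudocmd-add /usr/sbin/reboot

# Create the rule, then attach the command, the host, and the user to it
$ ipa sudorule-add idm_user_reboot
$ ipa sudorule-add-allow-command idm_user_reboot --sudocmds '/usr/sbin/reboot'
$ ipa sudorule-add-host idm_user_reboot --hosts idmclient.idm.example.com
$ ipa sudorule-add-user idm_user_reboot --users idm_user

# Optional: limit when the rule is valid (yyyymmddHHMMSSZ format)
$ ipa sudorule-mod idm_user_reboot --setattr sudonotbefore=20251231123400Z
$ ipa sudorule-mod idm_user_reboot --setattr sudonotafter=20261231123400Z

# Verification, on the idmclient host as idm_user
$ sudo -l
$ sudo /usr/sbin/reboot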
Display which sudo rules the idm_user account is allowed to perform. Reboot the machine using sudo. Enter the password for idm_user when prompted: 20.3. Granting sudo access to an AD user on an IdM client using the CLI Identity Management (IdM) system administrators can use IdM user groups to set access permissions, host-based access control, sudo rules, and other controls on IdM users. IdM user groups grant and restrict access to IdM domain resources. You can add both Active Directory (AD) users and AD groups to IdM user groups. To do that: Add the AD users or groups to a non-POSIX external IdM group. Add the non-POSIX external IdM group to an IdM POSIX group. You can then manage the privileges of the AD users by managing the privileges of the POSIX group. For example, you can grant sudo access for a specific command to an IdM POSIX user group on a specific IdM host. Note It is also possible to add AD user groups as members to IdM external groups. This might make it easier to define policies for Windows users, by keeping the user and group management within the single AD realm. Important Do not use ID overrides of AD users for SUDO rules in IdM. ID overrides of AD users represent only POSIX attributes of AD users, not AD users themselves. You can add ID overrides as group members. However, you can only use this functionality to manage IdM resources in the IdM API. The possibility to add ID overrides as group members is not extended to POSIX environments and you therefore cannot use it for membership in sudo or host-based access control (HBAC) rules. Follow this procedure to create the ad_users_reboot sudo rule to grant the administrator@ad-domain.com AD user the permission to run the /usr/sbin/reboot command on the idmclient IdM host, which is normally reserved for the root user. administrator@ad-domain.com is a member of the ad_users_external non-POSIX group, which is, in turn, a member of the ad_users POSIX group. Prerequisites You have obtained the IdM admin Kerberos ticket-granting ticket (TGT). A cross-forest trust exists between the IdM domain and the ad-domain.com AD domain. No local administrator account is present on the idmclient host: the administrator user is not listed in the local /etc/passwd file. Procedure Create the ad_users group that contains the ad_users_external group with the administrator@ad-domain.com member: Optional: Create or select a corresponding group in the AD domain to use to manage AD users in the IdM realm. You can use multiple AD groups and add them to different groups on the IdM side. Create the ad_users_external group and indicate that it contains members from outside the IdM domain by adding the --external option: Note Ensure that the external group that you specify here is an AD security group with a global or universal group scope as defined in the Active Directory security groups document. For example, the Domain users or Domain admins AD security groups cannot be used because their group scope is domain local. Create the ad_users group: Add the administrator@ad-domain.com AD user to ad_users_external as an external member: The AD user must be identified by a fully-qualified name, such as DOMAIN\user_name or user_name@DOMAIN. The AD identity is then mapped to the AD SID for the user. The same applies to adding AD groups.
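As a sketch, the group setup in this procedure maps to the following commands; the administrator@ad-domain.com user name and the group names are the examples used in this section:

# Create the non-POSIX external group that can hold AD members
$ ipa group-add --desc='AD users external map' ad_users_external --external

# Create the POSIX group that IdM policies, such as sudo rules, will reference
$ ipa group-add --desc='AD users' ad_users

# Add the AD user to the external group; the identity is resolved to its AD SID
$ ipa group-add-member ad_users_external --external "administrator@ad-domain.com"

# Nest the external group in the POSIX group (the next step of the procedure)
$ ipa group-add-member ad_users --groups ad_users_external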
Add ad_users_external to ad_users as a member: Grant the members of ad_users the permission to run /usr/sbin/reboot on the idmclient host: Add the /usr/sbin/reboot command to the IdM database of sudo commands: Create a sudo rule named ad_users_reboot : Add the /usr/sbin/reboot command to the ad_users_reboot rule: Apply the ad_users_reboot rule to the IdM idmclient host: Add the ad_users group to the ad_users_reboot rule: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as [email protected] , an indirect member of the ad_users group: Optional: Display the sudo commands that [email protected] is allowed to execute: Reboot the machine using sudo . Enter the password for [email protected] when prompted: Additional resources Active Directory users and Identity Management groups Include users and groups from a trusted Active Directory domain into SUDO rules 20.4. Granting sudo access to an IdM user on an IdM client using the IdM Web UI In Identity Management (IdM), you can grant sudo access for a specific command to an IdM user account on a specific IdM host. First, add a sudo command and then create a sudo rule for one or more commands. Complete this procedure to create the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the command line, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. Procedure Add the /usr/sbin/reboot command to the IdM database of sudo commands: Navigate to Policy Sudo Sudo Commands . Click Add in the upper right corner to open the Add sudo command dialog box. Enter the command you want the user to be able to perform using sudo : /usr/sbin/reboot . Figure 20.1. Adding IdM sudo command Click Add . Use the new sudo command entry to create a sudo rule to allow idm_user to reboot the idmclient machine: Navigate to Policy Sudo Sudo rules . Click Add in the upper right corner to open the Add sudo rule dialog box. Enter the name of the sudo rule: idm_user_reboot . Click Add and Edit . Specify the user: In the Who section, check the Specified Users and Groups radio button. In the User category the rule applies to subsection, click Add to open the Add users into sudo rule "idm_user_reboot" dialog box. In the Add users into sudo rule "idm_user_reboot" dialog box in the Available column, check the idm_user checkbox, and move it to the Prospective column. Click Add . Specify the host: In the Access this host section, check the Specified Hosts and Groups radio button. In the Host category this rule applies to subsection, click Add to open the Add hosts into sudo rule "idm_user_reboot" dialog box. In the Add hosts into sudo rule "idm_user_reboot" dialog box in the Available column, check the idmclient.idm.example.com checkbox, and move it to the Prospective column. Click Add . Specify the commands: In the Command category the rule applies to subsection of the Run Commands section, check the Specified Commands and Groups radio button. In the Sudo Allow Commands subsection, click Add to open the Add allow sudo commands into sudo rule "idm_user_reboot" dialog box. 
In the Add allow sudo commands into sudo rule "idm_user_reboot" dialog box in the Available column, check the /usr/sbin/reboot checkbox, and move it to the Prospective column. Click Add to return to the idm_sudo_reboot page. Figure 20.2. Adding IdM sudo rule Click Save in the top left corner. The new rule is enabled by default. Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to idmclient as idm_user . Reboot the machine using sudo . Enter the password for idm_user when prompted: If the sudo rule is configured correctly, the machine reboots. 20.5. Creating a sudo rule on the CLI that runs a command as a service account on an IdM client In IdM, you can configure a sudo rule with a RunAs alias to run a sudo command as another user or group. For example, you might have an IdM client that hosts a database application, and you need to run commands as the local service account that corresponds to that application. Use this example to create a sudo rule on the command line called run_third-party-app_report to allow the idm_user account to run the /opt/third-party-app/bin/report command as the thirdpartyapp service account on the idmclient host. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. You have a custom application named third-party-app installed on the idmclient host. The report command for the third-party-app application is installed in the /opt/third-party-app/bin/report directory. You have created a local service account named thirdpartyapp to execute commands for the third-party-app application. Procedure Retrieve a Kerberos ticket as the IdM admin . Add the /opt/third-party-app/bin/report command to the IdM database of sudo commands: Create a sudo rule named run_third-party-app_report : Use the --users= <user> option to specify the RunAs user for the sudorule-add-runasuser command: The user (or group specified with the --groups=* option) can be external to IdM, such as a local service account or an Active Directory user. Do not add a % prefix for group names. Add the /opt/third-party-app/bin/report command to the run_third-party-app_report rule: Apply the run_third-party-app_report rule to the IdM idmclient host: Add the idm_user account to the run_third-party-app_report rule: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. Test the new sudo rule: Display which sudo rules the idm_user account is allowed to perform. Run the report command as the thirdpartyapp service account. 20.6. Creating a sudo rule in the IdM WebUI that runs a command as a service account on an IdM client In IdM, you can configure a sudo rule with a RunAs alias to run a sudo command as another user or group. For example, you might have an IdM client that hosts a database application, and you need to run commands as the local service account that corresponds to that application. Use this example to create a sudo rule in the IdM WebUI called run_third-party-app_report to allow the idm_user account to run the /opt/third-party-app/bin/report command as the thirdpartyapp service account on the idmclient host. 
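Whether you create it on the command line (as in the previous section) or in the Web UI (as described next), the resulting RunAs rule is the same. On the CLI, the run_third-party-app_report example amounts roughly to the following sketch, using the example account and path names from this chapter:

$ kinit admin
$ ipa sudocmd-add /opt/third-party-app/bin/report
$ ipa sudorule-add run_third-party-app_report

# The RunAs user is a local service account external to IdM; do not add a % prefix for group names
$ ipa sudorule-add-runasuser run_third-party-app_report --users=thirdpartyapp

$ ipa sudorule-add-allow-command run_third-party-app_report --sudocmds '/opt/third-party-app/bin/report'
$ ipa sudorule-add-host run_third-party-app_report --hosts idmclient.idm.example.com
$ ipa sudorule-add-user run_third-party-app_report --users idm_user

# Verification, on the idmclient host as idm_user
$ sudo -l
$ sudo -u thirdpartyapp /opt/third-party-app/bin/report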
Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. You have a custom application named third-party-app installed on the idmclient host. The report command for the third-party-app application is installed in the /opt/third-party-app/bin/report directory. You have created a local service account named thirdpartyapp to execute commands for the third-party-app application. Procedure Add the /opt/third-party-app/bin/report command to the IdM database of sudo commands: Navigate to Policy Sudo Sudo Commands . Click Add in the upper right corner to open the Add sudo command dialog box. Enter the command: /opt/third-party-app/bin/report . Click Add . Use the new sudo command entry to create the new sudo rule: Navigate to Policy Sudo Sudo rules . Click Add in the upper right corner to open the Add sudo rule dialog box. Enter the name of the sudo rule: run_third-party-app_report . Click Add and Edit . Specify the user: In the Who section, check the Specified Users and Groups radio button. In the User category the rule applies to subsection, click Add to open the Add users into sudo rule "run_third-party-app_report" dialog box. In the Add users into sudo rule "run_third-party-app_report" dialog box in the Available column, check the idm_user checkbox, and move it to the Prospective column. Click Add . Specify the host: In the Access this host section, check the Specified Hosts and Groups radio button. In the Host category this rule applies to subsection, click Add to open the Add hosts into sudo rule "run_third-party-app_report" dialog box. In the Add hosts into sudo rule "run_third-party-app_report" dialog box in the Available column, check the idmclient.idm.example.com checkbox, and move it to the Prospective column. Click Add . Specify the commands: In the Command category the rule applies to subsection of the Run Commands section, check the Specified Commands and Groups radio button. In the Sudo Allow Commands subsection, click Add to open the Add allow sudo commands into sudo rule "run_third-party-app_report" dialog box. In the Add allow sudo commands into sudo rule "run_third-party-app_report" dialog box in the Available column, check the /opt/third-party-app/bin/report checkbox, and move it to the Prospective column. Click Add to return to the run_third-party-app_report page. Specify the RunAs user: In the As Whom section, check the Specified Users and Groups radio button. In the RunAs Users subsection, click Add to open the Add RunAs users into sudo rule "run_third-party-app_report" dialog box. In the Add RunAs users into sudo rule "run_third-party-app_report" dialog box, enter the thirdpartyapp service account in the External box and move it to the Prospective column. Click Add to return to the run_third-party-app_report page. Click Save in the top left corner. The new rule is enabled by default. Figure 20.3. Details of the sudo rule Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. Test the new sudo rule: Display which sudo rules the idm_user account is allowed to perform. Run the report command as the thirdpartyapp service account. 20.7. 
Enabling GSSAPI authentication for sudo on an IdM client Enable Generic Security Service Application Program Interface (GSSAPI) authentication on an IdM client for the sudo and sudo -i commands via the pam_sss_gss.so PAM module. With this configuration, IdM users can authenticate to the sudo command with their Kerberos ticket. Prerequisites You have created a sudo rule for an IdM user that applies to an IdM host. For this example, you have created the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient host. You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure Open the /etc/sssd/sssd.conf configuration file. Add the following entry to the [domain/ <domain_name> ] section. Save and close the /etc/sssd/sssd.conf file. Restart the SSSD service to load the configuration changes. On RHEL 9.2 or later: Optional: Determine if you have selected the sssd authselect profile: If the sssd authselect profile is selected, enable GSSAPI authentication: If the sssd authselect profile is not selected, select it and enable GSSAPI authentication: On RHEL 9.1 or earlier: Open the /etc/pam.d/sudo PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo file. Save and close the /etc/pam.d/sudo file. Verification Log into the host as the idm_user account. Verify that you have a ticket-granting ticket as the idm_user account. Optional: If you do not have Kerberos credentials for the idm_user account, delete your current Kerberos credentials and request the correct ones. Reboot the machine using sudo , without specifying a password. Additional resources The GSSAPI entry in the IdM terminology listing Granting sudo access to an IdM user on an IdM client using IdM Web UI Granting sudo access to an IdM user on an IdM client using the CLI pam_sss_gss (8) and sssd.conf (5) man pages on your system 20.8. Enabling GSSAPI authentication and enforcing Kerberos authentication indicators for sudo on an IdM client Enable Generic Security Service Application Program Interface (GSSAPI) authentication on an IdM client for the sudo and sudo -i commands via the pam_sss_gss.so PAM module. Additionally, only users who have logged in with a smart card will authenticate to those commands with their Kerberos ticket. Note You can use this procedure as a template to configure GSSAPI authentication with SSSD for other PAM-aware services, and further restrict access to only those users that have a specific authentication indicator attached to their Kerberos ticket. Prerequisites You have created a sudo rule for an IdM user that applies to an IdM host. For this example, you have created the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient host. You have configured smart card authentication for the idmclient host. You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure Open the /etc/sssd/sssd.conf configuration file. Add the following entries to the [domain/ <domain_name> ] section. Save and close the /etc/sssd/sssd.conf file. Restart the SSSD service to load the configuration changes. 
On RHEL 9.2 or later: Determine if you have selected the sssd authselect profile: Optional: Select the sssd authselect profile: Enable GSSAPI authentication: Configure the system to authenticate only users with smart cards: On RHEL 9.1 or earlier: Open the /etc/pam.d/sudo PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo file. Save and close the /etc/pam.d/sudo file. Open the /etc/pam.d/sudo-i PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo-i file. Save and close the /etc/pam.d/sudo-i file. Verification Log into the host as the idm_user account and authenticate with a smart card. Verify that you have a ticket-granting ticket as the smart card user. Display which sudo rules the idm_user account is allowed to perform. Reboot the machine using sudo , without specifying a password. Additional resources SSSD options controlling GSSAPI authentication for PAM services The GSSAPI entry in the IdM terminology listing Configuring Identity Management for smart card authentication Kerberos authentication indicators Granting sudo access to an IdM user on an IdM client using IdM Web UI Granting sudo access to an IdM user on an IdM client using the CLI . pam_sss_gss (8) and sssd.conf (5) man pages on your system 20.9. SSSD options controlling GSSAPI authentication for PAM services You can use the following options for the /etc/sssd/sssd.conf configuration file to adjust the GSSAPI configuration within the SSSD service. pam_gssapi_services GSSAPI authentication with SSSD is disabled by default. You can use this option to specify a comma-separated list of PAM services that are allowed to try GSSAPI authentication using the pam_sss_gss.so PAM module. To explicitly disable GSSAPI authentication, set this option to - . pam_gssapi_indicators_map This option only applies to Identity Management (IdM) domains. Use this option to list Kerberos authentication indicators that are required to grant PAM access to a service. Pairs must be in the format <PAM_service> :_<required_authentication_indicator>_ . Valid authentication indicators are: otp for two-factor authentication radius for RADIUS authentication pkinit for PKINIT, smart card, or certificate authentication hardened for hardened passwords pam_gssapi_check_upn This option is enabled and set to true by default. If this option is enabled, the SSSD service requires that the user name matches the Kerberos credentials. If false , the pam_sss_gss.so PAM module authenticates every user that is able to obtain the required service ticket. Examples The following options enable Kerberos authentication for the sudo and sudo-i services, requires that sudo users authenticated with a one-time password, and user names must match the Kerberos principal. Because these settings are in the [pam] section, they apply to all domains: You can also set these options in individual [domain] sections to overwrite any global values in the [pam] section. The following options apply different GSSAPI settings to each domain: For the idm.example.com domain Enable GSSAPI authentication for the sudo and sudo -i services. Require certificate or smart card authentication authenticators for the sudo command. Require one-time password authentication authenticators for the sudo -i command. Enforce matching user names and Kerberos principals. For the ad.example.com domain Enable GSSAPI authentication only for the sudo service. Do not enforce matching user names and principals. 
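To illustrate how these options fit together, a combined configuration might look like the following sketch. The sssd.conf values mirror the per-domain examples above, the pam_sss_gss.so line is the entry added to /etc/pam.d/sudo on RHEL 9.1 or earlier, and the domain names are the examples used in this chapter:

# /etc/sssd/sssd.conf: global defaults in [pam], overridden per domain
[pam]
pam_gssapi_services = sudo, sudo-i
pam_gssapi_indicators_map = sudo:otp
pam_gssapi_check_upn = true

[domain/idm.example.com]
pam_gssapi_services = sudo, sudo-i
pam_gssapi_indicators_map = sudo:pkinit, sudo-i:otp
pam_gssapi_check_upn = true

[domain/ad.example.com]
pam_gssapi_services = sudo
pam_gssapi_check_upn = false

# /etc/pam.d/sudo: pam_sss_gss.so must be the first auth line (RHEL 9.1 or earlier)
#%PAM-1.0
auth       sufficient   pam_sss_gss.so
auth       include      system-auth
account    include      system-auth
password   include      system-auth
session    include      system-auth

# After editing sssd.conf, reload the configuration: systemctl restart sssd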
Additional resources Kerberos authentication indicators 20.10. Troubleshooting GSSAPI authentication for sudo If you are unable to authenticate to the sudo service with a Kerberos ticket from IdM, use the following scenarios to troubleshoot your configuration. Prerequisites You have enabled GSSAPI authentication for the sudo service. See Enabling GSSAPI authentication for sudo on an IdM client . You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure If you see the following error, the Kerberos service might not able to resolve the correct realm for the service ticket based on the host name: In this situation, add the hostname directly to [domain_realm] section in the /etc/krb5.conf Kerberos configuration file: If you see the following error, you do not have any Kerberos credentials: In this situation, retrieve Kerberos credentials with the kinit utility or authenticate with SSSD: If you see either of the following errors in the /var/log/sssd/sssd_pam.log log file, the Kerberos credentials do not match the username of the user currently logged in: In this situation, verify that you authenticated with SSSD, or consider disabling the pam_gssapi_check_upn option in the /etc/sssd/sssd.conf file: For additional troubleshooting, you can enable debugging output for the pam_sss_gss.so PAM module. Add the debug option at the end of all pam_sss_gss.so entries in PAM files, such as /etc/pam.d/sudo and /etc/pam.d/sudo-i : Try to authenticate with the pam_sss_gss.so module and review the console output. In this example, the user did not have any Kerberos credentials. 20.11. Using an Ansible playbook to ensure sudo access for an IdM user on an IdM client In Identity Management (IdM), you can ensure sudo access to a specific command is granted to an IdM user account on a specific IdM host. Complete this procedure to ensure a sudo rule named idm_user_reboot exists. The rule grants idm_user the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You have ensured the presence of a user account for idm_user in IdM and unlocked the account by creating a password for the user . For details on adding a new IdM user using the command line, see link: Adding users using the command line . No local idm_user account exists on idmclient . The idm_user user is not listed in the /etc/passwd file on idmclient . Procedure Create an inventory file, for example inventory.file , and define ipaservers in it: Add one or more sudo commands: Create an ensure-reboot-sudocmd-is-present.yml Ansible playbook that ensures the presence of the /usr/sbin/reboot command in the IdM database of sudo commands. 
To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/sudocmd/ensure-sudocmd-is-present.yml file: Run the playbook: Create a sudo rule that references the commands: Create an ensure-sudorule-for-idmuser-on-idmclient-is-present.yml Ansible playbook that uses the sudo command entry to ensure the presence of a sudo rule. The sudo rule allows idm_user to reboot the idmclient machine. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/sudorule/ensure-sudorule-is-present.yml file: Run the playbook: Verification Test that the sudo rule whose presence you have ensured on the IdM server works on idmclient by verifying that idm_user can reboot idmclient using sudo . Note that it can take a few minutes for the changes made on the server to take effect on the client. Log in to idmclient as idm_user . Reboot the machine using sudo . Enter the password for idm_user when prompted: If sudo is configured correctly, the machine reboots. Additional resources See the README-sudocmd.md , README-sudocmdgroup.md , and README-sudorule.md files in the /usr/share/doc/ansible-freeipa/ directory.
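As a sketch, the two playbooks described in this procedure, modeled on the ensure-sudocmd-is-present.yml and ensure-sudorule-is-present.yml examples shipped in /usr/share/doc/ansible-freeipa/playbooks/, could look like this; the ipaserver host group and the secret.yml vault path follow the assumptions listed in the prerequisites:

---
# ensure-reboot-sudocmd-is-present.yml
- name: Playbook to manage sudo command
  hosts: ipaserver
  vars_files:
  - /home/user_name/MyPlaybooks/secret.yml
  tasks:
  # Ensure the /usr/sbin/reboot command is present in the IdM database of sudo commands
  - ipasudocmd:
      ipaadmin_password: "{{ ipaadmin_password }}"
      name: /usr/sbin/reboot
      state: present

---
# ensure-sudorule-for-idmuser-on-idmclient-is-present.yml
- name: Playbook to manage sudo rule
  hosts: ipaserver
  vars_files:
  - /home/user_name/MyPlaybooks/secret.yml
  tasks:
  # Ensure idm_user can run /usr/sbin/reboot on idmclient through the idm_user_reboot rule
  - ipasudorule:
      ipaadmin_password: "{{ ipaadmin_password }}"
      name: idm_user_reboot
      description: A test sudo rule.
      allow_sudocmd: /usr/sbin/reboot
      host: idmclient.idm.example.com
      user: idm_user
      state: present

Run each playbook with ansible-playbook, for example: ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-reboot-sudocmd-is-present.yml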
[ "kinit admin", "ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot", "ipa sudorule-add idm_user_reboot --------------------------------- Added Sudo Rule \"idm_user_reboot\" --------------------------------- Rule name: idm_user_reboot Enabled: TRUE", "ipa sudorule-add-allow-command idm_user_reboot --sudocmds '/usr/sbin/reboot' Rule name: idm_user_reboot Enabled: TRUE Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-host idm_user_reboot --hosts idmclient.idm.example.com Rule name: idm_user_reboot Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-user idm_user_reboot --users idm_user Rule name: idm_user_reboot Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------", "ipa sudorule-mod idm_user_reboot --setattr sudonotbefore=20251231123400Z", "ipa sudorule-mod idm_user_reboot --setattr sudonotafter=20261231123400Z", "[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idm_user on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot", "[idm_user@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for idm_user:", "ipa group-add --desc='AD users external map' ad_users_external --external ------------------------------- Added group \"ad_users_external\" ------------------------------- Group name: ad_users_external Description: AD users external map", "ipa group-add --desc='AD users' ad_users ---------------------- Added group \"ad_users\" ---------------------- Group name: ad_users Description: AD users GID: 129600004", "ipa group-add-member ad_users_external --external \"[email protected]\" [member user]: [member group]: Group name: ad_users_external Description: AD users external map External member: S-1-5-21-3655990580-1375374850-1633065477-513 ------------------------- Number of members added 1 -------------------------", "ipa group-add-member ad_users --groups ad_users_external Group name: ad_users Description: AD users GID: 129600004 Member groups: ad_users_external ------------------------- Number of members added 1 -------------------------", "ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot", "ipa sudorule-add ad_users_reboot --------------------------------- Added Sudo Rule \"ad_users_reboot\" --------------------------------- Rule name: ad_users_reboot Enabled: True", "ipa sudorule-add-allow-command ad_users_reboot --sudocmds '/usr/sbin/reboot' Rule name: ad_users_reboot Enabled: True Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of 
members added 1 -------------------------", "ipa sudorule-add-host ad_users_reboot --hosts idmclient.idm.example.com Rule name: ad_users_reboot Enabled: True Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-user ad_users_reboot --groups ad_users Rule name: ad_users_reboot Enabled: TRUE User Groups: ad_users Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------", "ssh [email protected]@ipaclient Password:", "[[email protected]@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient : (root) /usr/sbin/reboot", "[[email protected]@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for [email protected]:", "sudo /usr/sbin/reboot [sudo] password for idm_user:", "kinit admin", "ipa sudocmd-add /opt/third-party-app/bin/report ---------------------------------------------------- Added Sudo Command \"/opt/third-party-app/bin/report\" ---------------------------------------------------- Sudo Command: /opt/third-party-app/bin/report", "ipa sudorule-add run_third-party-app_report -------------------------------------------- Added Sudo Rule \"run_third-party-app_report\" -------------------------------------------- Rule name: run_third-party-app_report Enabled: TRUE", "ipa sudorule-add-runasuser run_third-party-app_report --users= thirdpartyapp Rule name: run_third-party-app_report Enabled: TRUE RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-allow-command run_third-party-app_report --sudocmds '/opt/third-party-app/bin/report' Rule name: run_third-party-app_report Enabled: TRUE Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-host run_third-party-app_report --hosts idmclient.idm.example.com Rule name: run_third-party-app_report Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-user run_third-party-app_report --users idm_user Rule name: run_third-party-app_report Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1", "[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION 
LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report", "[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.", "[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report", "[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.", "[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i", "systemctl restart sssd", "authselect current Profile ID: sssd", "authselect enable-feature with-gssapi", "authselect select sssd with-gssapi", "#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth", "ssh -l [email protected] localhost [email protected]'s password:", "[idmuser@idmclient ~]USD klist Ticket cache: KCM:1366201107 Default principal: [email protected] Valid starting Expires Service principal 01/08/2021 09:11:48 01/08/2021 19:11:48 krbtgt/[email protected] renew until 01/15/2021 09:11:44", "[idm_user@idmclient ~]USD kdestroy -A [idm_user@idmclient ~]USD kinit [email protected] Password for [email protected] :", "[idm_user@idmclient ~]USD sudo /usr/sbin/reboot", "[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit, sudo-i:pkinit", "systemctl restart sssd", "authselect current Profile ID: sssd", "authselect select sssd", "authselect enable-feature with-gssapi", "authselect with-smartcard-required", "#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth", "#%PAM-1.0 auth sufficient pam_sss_gss.so auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo", "ssh -l [email protected] localhost PIN for smart_card", "[idm_user@idmclient ~]USD klist Ticket cache: KEYRING:persistent:1358900015:krb_cache_TObtNMd Default principal: [email protected] Valid starting Expires Service principal 02/15/2021 16:29:48 02/16/2021 02:29:48 krbtgt/[email protected] renew until 02/22/2021 16:29:44", "[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idmuser on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY 
LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot", "[idm_user@idmclient ~]USD sudo /usr/sbin/reboot", "[pam] pam_gssapi_services = sudo , sudo-i pam_gssapi_indicators_map = sudo:otp pam_gssapi_check_upn = true", "[domain/ idm.example.com ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit , sudo-i:otp pam_gssapi_check_upn = true [domain/ ad.example.com ] pam_gssapi_services = sudo pam_gssapi_check_upn = false", "Server not found in Kerberos database", "[idm-user@idm-client ~]USD cat /etc/krb5.conf [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM server.example.com = EXAMPLE.COM", "No Kerberos credentials available", "[idm-user@idm-client ~]USD kinit [email protected] Password for [email protected] :", "User with UPN [ <UPN> ] was not found. UPN [ <UPN> ] does not match target user [ <username> ].", "[idm-user@idm-client ~]USD cat /etc/sssd/sssd.conf pam_gssapi_check_upn = false", "cat /etc/pam.d/sudo #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include system-auth account include system-auth password include system-auth session include system-auth", "cat /etc/pam.d/sudo-i #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo", "[idm-user@idm-client ~]USD sudo ls -l /etc/sssd/sssd.conf pam_sss_gss: Initializing GSSAPI authentication with SSSD pam_sss_gss: Switching euid from 0 to 1366201107 pam_sss_gss: Trying to establish security context pam_sss_gss: SSSD User name: [email protected] pam_sss_gss: User domain: idm.example.com pam_sss_gss: User principal: pam_sss_gss: Target name: [email protected] pam_sss_gss: Using ccache: KCM: pam_sss_gss: Acquiring credentials, principal name will be derived pam_sss_gss: Unable to read credentials from [KCM:] [maj:0xd0000, min:0x96c73ac3] pam_sss_gss: GSSAPI: Unspecified GSS failure. Minor code may provide more information pam_sss_gss: GSSAPI: No credentials cache found pam_sss_gss: Switching euid from 1366200907 to 0 pam_sss_gss: System error [5]: Input/output error", "[ipaservers] server.idm.example.com", "--- - name: Playbook to manage sudo command hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure sudo command is present - ipasudocmd: ipaadmin_password: \"{{ ipaadmin_password }}\" name: /usr/sbin/reboot state: present", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-reboot-sudocmd-is-present.yml", "--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure a sudorule is present granting idm_user the permission to run /usr/sbin/reboot on idmclient - ipasudorule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user_reboot description: A test sudo rule. allow_sudocmd: /usr/sbin/reboot host: idmclient.idm.example.com user: idm_user state: present", "ansible-playbook -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-sudorule-for-idmuser-on-idmclient-is-present.yml", "sudo /usr/sbin/reboot [sudo] password for idm_user:" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/granting-sudo-access-to-an-IdM-user-on-an-IdM-client_using-ansible-to-install-and-manage-identity-management
Chapter 1. Customizing nodes
Chapter 1. Customizing nodes OpenShift Container Platform supports both cluster-wide and per-machine configuration via Ignition, which allows arbitrary partitioning and file content changes to the operating system. In general, if a configuration file is documented in Red Hat Enterprise Linux (RHEL), then modifying it via Ignition is supported. There are two ways to deploy machine config changes: Creating machine configs that are included in manifest files to start up a cluster during openshift-install . Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator. Additionally, modifying the reference config, such as the Ignition config that is passed to coreos-installer when installing bare-metal nodes allows per-machine configuration. These changes are currently not visible to the Machine Config Operator. The following sections describe features that you might want to configure on your nodes in this way. 1.1. Creating machine configs with Butane Machine configs are used to configure control plane and worker machines by instructing machines how to create users and file systems, set up the network, install systemd units, and more. Because modifying machine configs can be difficult, you can use Butane configs to create machine configs for you, thereby making node configuration much easier. 1.1.1. About Butane Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. The format of the Butane config file that Butane accepts is defined in the OpenShift Butane config spec . 1.1.2. Installing Butane You can install the Butane tool ( butane ) to create OpenShift Container Platform machine configs from a command-line interface. You can install butane on Linux, Windows, or macOS by downloading the corresponding binary file. Tip Butane releases are backwards-compatible with older releases and with the Fedora CoreOS Config Transpiler (FCCT). Procedure Navigate to the Butane image download page at https://mirror.openshift.com/pub/openshift-v4/clients/butane/ . Get the butane binary: For the newest version of Butane, save the latest butane image to your current directory: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane Optional: For a specific type of architecture you are installing Butane on, such as aarch64 or ppc64le, indicate the appropriate URL. For example: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane Make the downloaded binary file executable: USD chmod +x butane Move the butane binary file to a directory on your PATH . To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification steps You can now use the Butane tool by running the butane command: USD butane <butane_file> 1.1.3. Creating a MachineConfig object by using Butane You can use Butane to produce a MachineConfig object so that you can configure worker or control plane nodes at installation time or via the Machine Config Operator. Prerequisites You have installed the butane utility. Procedure Create a Butane config file. 
The following example creates a file named 99-worker-custom.bu that configures the system console to show kernel debug messages and specifies custom settings for the chrony time service: variant: openshift version: 4.18.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony Note The 99-worker-custom.bu file is set to create a machine config for worker nodes. To deploy on control plane nodes, change the role from worker to master . To do both, you could repeat the whole procedure using different file names for the two types of deployments. Create a MachineConfig object by giving Butane the file that you created in the step: USD butane 99-worker-custom.bu -o ./99-worker-custom.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. If the cluster is not running yet, generate manifest files and add the MachineConfig object YAML file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-worker-custom.yaml Additional resources Adding kernel modules to nodes Encrypting and mirroring disks during installation 1.2. Adding day-1 kernel arguments Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect before the systems first boot up: You need to do some low-level network configuration before the systems start. You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters . It is best to only add kernel arguments with this procedure if they are needed to complete the initial OpenShift Container Platform installation. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml ) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to control plane nodes: USD cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF You can change master to worker to add kernel arguments to worker nodes instead. 
Create a separate YAML file to add to both master and worker nodes. You can now continue on to create the cluster. 1.3. Adding kernel modules to nodes For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OpenShift Container Platform cluster. When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel. The way that this feature is able to keep the module up to date on each node is by: Adding a systemd service to each node that starts at boot time to detect if a new kernel has been installed and If a new kernel is detected, the service rebuilds the module and installs it to the kernel For information on the software needed for this procedure, see the kmods-via-containers github site. A few important issues to keep in mind: This procedure is Technology Preview. Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure. Third-party kernel modules you might add through these procedures are not supported by Red Hat. In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription. 1.3.1. Building and testing the kernel module container Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmod-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following: Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software that is required to build the software and container: # yum install podman make git -y Clone the kmod-via-containers repository: Create a folder for the repository: USD mkdir kmods; cd kmods Clone the repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-container systemd service and loads it: Change to the kmod-via-containers directory: USD cd kmods-via-containers/ Install the KVC framework instance: USD sudo make install Reload the systemd manager configuration: USD sudo systemctl daemon-reload Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example that can be cloned to your system as follows: USD cd .. 
; git clone https://github.com/kmods-via-containers/kvc-simple-kmod Edit the configuration file, simple-kmod.conf file, in this example, and change the name of the Dockerfile to Dockerfile.rhel : Change to the kvc-simple-kmod directory: USD cd kvc-simple-kmod Rename the Dockerfile: USD cat simple-kmod.conf Example Dockerfile KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES="simple-kmod simple-procfs-kmod" Create an instance of [email protected] for your kernel module, simple-kmod in this example: USD sudo make install Enable the [email protected] instance: USD sudo kmods-via-containers build simple-kmod USD(uname -r) Enable and start the systemd service: USD sudo systemctl enable [email protected] --now Review the service status: USD sudo systemctl status [email protected] Example output ● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago... To confirm that the kernel modules are loaded, use the lsmod command to list the modules: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 Optional. Use other methods to check that the simple-kmod example is working: Look for a "Hello world" message in the kernel ring buffer with dmesg : USD dmesg | grep 'Hello world' Example output [ 6420.761332] Hello world from simple_kmod. Check the value of simple-procfs-kmod in /proc : USD sudo cat /proc/simple-procfs-kmod Example output simple-procfs-kmod number = 0 Run the spkut command to get more information from the module: USD sudo spkut 44 Example output KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44 Going forward, when the system boots this service will check if a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it. 1.3.2. Provisioning a kernel module to OpenShift Container Platform Depending on whether or not you must have the kernel module in place when OpenShift Container Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways: Provision kernel modules at cluster install time (day-1) : You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files. Provision kernel modules via Machine Config Operator (day-2) : If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO). In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content. Provide RHEL entitlements to each node. Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory and copy them to the same location as the other files you provide when you build your Ignition config. Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages. 
This must include new kernel packages as they are needed to match newly installed kernels. 1.3.2.1. Provision kernel modules via a MachineConfig object By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or control plane nodes at installation time or via the Machine Config Operator. Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software needed to build the kernel module and container: # yum install podman make git -y Create a directory to host the kernel module and tooling: USD mkdir kmods; cd kmods Get the kmods-via-containers software: Clone the kmods-via-containers repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Clone the kvc-simple-kmod repository: USD git clone https://github.com/kmods-via-containers/kvc-simple-kmod Get your module software. In this example, kvc-simple-kmod is used. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier: Create the directory: USD FAKEROOT=USD(mktemp -d) Change to the kmods-via-containers directory: USD cd kmods-via-containers Install the KVC framework instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Change to the kvc-simple-kmod directory: USD cd ../kvc-simple-kmod Create the instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running the following command: USD cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree Create a Butane config file, 99-simple-kmod.bu , that embeds the kernel module tree and enables the systemd service. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.18.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: kmods-via-containers@simple-kmod.service enabled: true 1 To deploy on control plane nodes, change worker to master . To deploy on both control plane and worker nodes, perform the remainder of these instructions once for each node type. Use Butane to generate a machine config YAML file, 99-simple-kmod.yaml , containing the files and configuration to be delivered: USD butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-simple-kmod.yaml Your nodes will start the kmods-via-containers@simple-kmod.service service and the kernel modules will be loaded. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node> , then chroot /host ). To list the modules, use the lsmod command: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 1.4. Encrypting and mirroring disks during installation During an OpenShift Container Platform installation, you can enable boot disk encryption and mirroring on the cluster nodes. 1.4.1. About disk encryption You can enable encryption for the boot disks on the control plane and compute nodes at installation time. OpenShift Container Platform supports the Trusted Platform Module (TPM) v2 and Tang encryption modes. TPM v2 This is the preferred mode.
TPM v2 stores passphrases in a secure cryptoprocessor on the server. You can use this mode to prevent decryption of the boot disk data on a cluster node if the disk is removed from the server. Tang Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE). You can bind the boot disk data on your cluster nodes to one or more Tang servers. This prevents decryption of the data unless the nodes are on a secure network where the Tang servers are accessible. Clevis is an automated decryption framework used to implement decryption on the client side. Important The use of the Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure. In earlier versions of Red Hat Enterprise Linux CoreOS (RHCOS), disk encryption was configured by specifying /etc/clevis.json in the Ignition config. That file is not supported in clusters created with OpenShift Container Platform 4.7 or later. Configure disk encryption by using the following procedure. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. This feature: Is available for installer-provisioned infrastructure, user-provisioned infrastructure, and Assisted Installer deployments For Assisted installer deployments: Each cluster can only have a single encryption method, Tang or TPM Encryption can be enabled on some or all nodes There is no Tang threshold; all servers must be valid and operational Encryption applies to the installation disks only, not to the workload disks Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only Sets up disk encryption during the manifest installation phase, encrypting all data written to disk, from first boot forward Requires no user intervention for providing passphrases Uses AES-256-XTS encryption, or AES-256-CBC if FIPS mode is enabled 1.4.1.1. Configuring an encryption threshold In OpenShift Container Platform, you can specify a requirement for more than one Tang server. You can also configure the TPM v2 and Tang encryption modes simultaneously. This enables boot disk data decryption only if the TPM secure cryptoprocessor is present and the Tang servers are accessible over a secure network. You can use the threshold attribute in your Butane configuration to define the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. The threshold is met when the stated value is reached through any combination of the declared conditions. In the case of offline provisioning, the offline server is accessed using an included advertisement, and only uses that supplied advertisement if the number of online servers do not meet the set threshold. 
For example, the threshold value of 2 in the following configuration can be reached by accessing two Tang servers, with the offline server available as a backup, or by accessing the TPM secure cryptoprocessor and one of the Tang servers: Example Butane configuration for disk encryption variant: openshift version: 4.18.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: "{\"payload\": \"...\", \"protected\": \"...\", \"signature\": \"...\"}" 4 threshold: 2 5 openshift: fips: true 1 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 2 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 3 Include this section if you want to use one or more Tang servers. 4 Optional: Include this field for offline provisioning. Ignition will provision the Tang server binding rather than fetching the advertisement from the server at runtime. This lets the server be unavailable at provisioning time. 5 Specify the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. Important The default threshold value is 1 . If you include multiple encryption conditions in your configuration but do not specify a threshold, decryption can occur if any of the conditions are met. Note If you require TPM v2 and Tang for decryption, the value of the threshold attribute must equal the total number of stated Tang servers plus one. If the threshold value is lower, it is possible to reach the threshold value by using a single encryption mode. For example, if you set tpm2 to true and specify two Tang servers, a threshold of 2 can be met by accessing the two Tang servers, even if the TPM secure cryptoprocessor is not available. 1.4.2. About disk mirroring During OpenShift Container Platform installation on control plane and worker nodes, you can enable mirroring of the boot and other disks to two or more redundant storage devices. A node continues to function after storage device failure provided one device remains available. Mirroring does not support replacement of a failed disk. Reprovision the node to restore the mirror to a pristine, non-degraded state. Note For user-provisioned infrastructure deployments, mirroring is available only on RHCOS systems. Support for mirroring is available on x86_64 nodes booted with BIOS or UEFI and on ppc64le nodes. 1.4.3. Configuring disk encryption and mirroring You can enable and configure encryption and mirroring during an OpenShift Container Platform installation. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to offer convenient, short-hand syntax for writing and validating machine configs. For more information, see "Creating machine configs with Butane". You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. 
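If you do not already have a Tang server to work with, a minimal test deployment on a spare RHEL 8 host might look like the following sketch. This is not part of the official procedure: the tang package and tangd.socket unit are the standard component names, but the port override to 7500 (to match the example URLs in this section) and the absence of firewall or TLS hardening are assumptions suitable only for a lab.

# Minimal Tang server sketch for a RHEL 8 lab host (not a production setup)
sudo yum install tang -y
# tangd.socket listens on port 80 by default; override it to port 7500 to
# match the example URLs in this section
sudo mkdir -p /etc/systemd/system/tangd.socket.d
sudo tee /etc/systemd/system/tangd.socket.d/port.conf > /dev/null <<'EOF'
[Socket]
ListenStream=
ListenStream=7500
EOF
sudo systemctl daemon-reload
sudo systemctl enable tangd.socket --now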
Procedure If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be enabled in the host firmware for each node. This is required on most Dell systems. Check the manual for your specific system. If you want to use Tang to encrypt your cluster, follow these preparatory steps: Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. Install the clevis package on a RHEL 8 machine, if it is not already installed: USD sudo yum install clevis On the RHEL 8 machine, run the following command to generate a thumbprint of the exchange key. Replace http://tang1.example.com:7500 with the URL of your Tang server: USD clevis-encrypt-tang '{"url":"http://tang1.example.com:7500"}' < /dev/null > /dev/null 1 1 In this example, tangd.socket is listening on port 7500 on the Tang server. Note The clevis-encrypt-tang command generates a thumbprint of the exchange key. No data passes to the encryption command during this step; /dev/null exists here as an input instead of plain text. The encrypted output is also sent to /dev/null , because it is not required for this procedure. Example output The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1 1 The thumbprint of the exchange key. When the Do you wish to trust these keys? [ynYN] prompt displays, type Y . Optional: For offline Tang provisioning: Obtain the advertisement from the server using the curl command. Replace http://tang2.example.com:7500 with the URL of your Tang server: USD curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws Expected output {"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"} Provide the advertisement file to Clevis for encryption: USD clevis-encrypt-tang '{"url":"http://tang2.example.com:7500","adv":"adv.jws"}' < /dev/null > /dev/null If the nodes are configured with static IP addressing, run coreos-installer iso customize --dest-karg-append or use the coreos-installer --append-karg option when installing RHCOS nodes to set the IP address of the installed system. Append the ip= and other arguments needed for your network. Important Some methods for configuring static IPs do not affect the initramfs after the first boot and will not work with Tang encryption. These include the coreos-installer --copy-network option, the coreos-installer iso customize --network-keyfile option, and the coreos-installer pxe customize --network-keyfile option, as well as adding ip= arguments to the kernel command line of the live ISO or PXE image during installation. Incorrect static IP configuration causes the second boot of the node to fail. On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 Replace <installation_directory> with the path to the directory that you want to store the installation files in. Create a Butane config that configures disk encryption, mirroring, or both. For example, to configure storage for compute nodes, create a USDHOME/clusterconfig/worker-storage.bu file. 
Butane config example for a boot device variant: openshift version: 4.18.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: "{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13 1 2 For control plane configurations, replace worker with master in both of these locations. 3 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 4 Include this section if you want to encrypt the root file system. For more details, see "About disk encryption". 5 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 6 Include this section if you want to use one or more Tang servers. 7 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500 on the Tang server. 8 Specify the exchange key thumbprint, which was generated in a preceding step. 9 Optional: Specify the advertisement for your offline Tang server in valid JSON format. 10 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The default value is 1 . For more information about this topic, see "Configuring an encryption threshold". 11 Include this section if you want to mirror the boot disk. For more details, see "About disk mirroring". 12 List all disk devices that should be included in the boot disk mirror, including the disk that RHCOS will be installed onto. 13 Include this directive to enable FIPS mode on your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . If you are configuring nodes to use both disk encryption and mirroring, both features must be configured in the same Butane configuration file. If you are configuring disk encryption on a node with FIPS mode enabled, you must include the fips directive in the same Butane configuration file, even if FIPS mode is also enabled in a separate manifest. Create a control plane or compute node manifest from the corresponding Butane configuration file and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml Repeat this step for each node type that requires disk encryption or mirroring. Save the Butane configuration file in case you need to update the manifests in the future. Continue with the remainder of the OpenShift Container Platform installation. Tip You can monitor the console log on the RHCOS nodes during installation for error messages relating to disk encryption or mirroring. Important If you configure additional data partitions, they will not be encrypted unless encryption is explicitly requested.
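If you keep one Butane storage file per node role, the manifest-generation step above can be scripted. The following minimal sketch assumes an installation directory of ./ocp-install and a master-storage.bu file alongside worker-storage.bu; both names are illustrative rather than defined by this procedure.

# Render one storage manifest per node role into the installer's openshift/ directory
INSTALL_DIR=./ocp-install   # adjust to your installation directory
for role in worker master; do
  butane "$HOME/clusterconfig/${role}-storage.bu" \
    -o "${INSTALL_DIR}/openshift/99-${role}-storage.yaml"
done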
Verification After installing OpenShift Container Platform, you can verify if boot disk encryption or mirroring is enabled on the cluster nodes. From the installation host, access a cluster node by using a debug pod: Start a debug pod for the node, for example: USD oc debug node/compute-1 Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the node in /host within the pod. By changing the root directory to /host , you can run binaries contained in the executable paths on the node: # chroot /host Note OpenShift Container Platform cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. If you configured boot disk encryption, verify if it is enabled: From the debug shell, review the status of the root mapping on the node: # cryptsetup status root Example output /dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write 1 The encryption format. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. 2 The encryption algorithm used to encrypt the LUKS2 volume. The aes-cbc-essiv:sha256 cipher is used if FIPS mode is enabled. 3 The device that contains the encrypted LUKS2 volume. If mirroring is enabled, the value will represent a software mirror device, for example /dev/md126 . List the Clevis plugins that are bound to the encrypted device: # clevis luks list -d /dev/sda4 1 1 Specify the device that is listed in the device field in the output of the preceding step. Example output 1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' 1 1 In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the /dev/sda4 device. If you configured mirroring, verify if it is enabled: From the debug shell, list the software RAID devices on the node: # cat /proc/mdstat Example output Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none> 1 The /dev/md126 software RAID mirror device uses the /dev/sda3 and /dev/sdb3 disk devices on the cluster node. 2 The /dev/md127 software RAID mirror device uses the /dev/sda4 and /dev/sdb4 disk devices on the cluster node. Review the details of each of the software RAID devices listed in the output of the preceding command. 
The following example lists the details of the /dev/md126 device: # mdadm --detail /dev/md126 Example output /dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8 1 Specifies the RAID level of the device. raid1 indicates RAID 1 disk mirroring. 2 Specifies the state of the RAID device. 3 4 States the number of underlying disk devices that are active and working. 5 States the number of underlying disk devices that are in a failed state. 6 The name of the software RAID device. 7 8 Provides information about the underlying disk devices used by the software RAID device. List the file systems mounted on the software RAID devices: # mount | grep /dev/md Example output /dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel) In the example output, the /boot file system is mounted on the /dev/md126 software RAID device and the root file system is mounted on /dev/md127 . Repeat the verification steps for each OpenShift Container Platform node type. Additional resources For more information about the TPM v2 and Tang encryption modes, see Configuring automated unlocking of encrypted volumes using policy-based decryption . 1.4.4. Configuring a RAID-enabled data volume You can enable software RAID partitioning to provide an external data volume. OpenShift Container Platform supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and RAID 10 for data protection and fault tolerance. See "About disk mirroring" for more details. 
Note OpenShift Container Platform 4.18 does not support software RAIDs on the installation drive. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You have installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. Procedure Create a Butane config that configures a data volume by using software RAID. To configure a data volume with RAID 1 on the same disks that are used for a mirrored boot disk, create a USDHOME/clusterconfig/raid1-storage.bu file, for example: RAID 1 on mirrored boot disk variant: openshift version: 4.18.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true 1 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. To configure a data volume with RAID 1 on secondary disks, create a USDHOME/clusterconfig/raid1-alt-storage.bu file, for example: RAID 1 on secondary disks variant: openshift version: 4.18.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true Create a RAID manifest from the Butane config you created in the step and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1 1 Replace <butane_config> and <manifest_name> with the file names from the step. For example, raid1-alt-storage.bu and raid1-alt-storage.yaml for secondary disks. Save the Butane config in case you need to update the manifest in the future. Continue with the remainder of the OpenShift Container Platform installation. 1.4.5. Configuring an Intel(R) Virtual RAID on CPU (VROC) data volume Intel(R) VROC is a type of hybrid RAID, where some of the maintenance is offloaded to the hardware, but appears as software RAID to the operating system. The following procedure configures an Intel(R) VROC-enabled RAID1. 
Prerequisites You have a system with Intel(R) Volume Management Device (VMD) enabled. Procedure Create the Intel(R) Matrix Storage Manager (IMSM) RAID container by running the following command: USD mdadm -CR /dev/md/imsm0 -e \ imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1 1 The RAID device names. In this example, there are two devices listed. If you provide more than two device names, you must adjust the -n flag. For example, listing three devices would use the flag -n3 . Create the RAID1 storage inside the container: Create a dummy RAID0 volume in front of the real RAID1 volume by running the following command: USD mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean Create the real RAID1 array by running the following command: USD mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0 Stop both RAID0 and RAID1 member arrays and delete the dummy RAID0 array with the following commands: USD mdadm -S /dev/md/dummy \ mdadm -S /dev/md/coreos \ mdadm --kill-subarray=0 /dev/md/imsm0 Restart the RAID1 arrays by running the following command: USD mdadm -A /dev/md/coreos /dev/md/imsm0 Install RHCOS on the RAID1 device: Get the UUID of the IMSM container by running the following command: USD mdadm --detail --export /dev/md/imsm0 Install RHCOS and include the rd.md.uuid kernel argument by running the following command: USD coreos-installer install /dev/md/coreos \ --append-karg rd.md.uuid=<md_UUID> 1 ... 1 The UUID of the IMSM container. Include any additional coreos-installer arguments you need to install RHCOS. 1.5. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.18.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Note For all-machine to all-machine communication, the Network Time Protocol (NTP) on UDP is port 123 . If an external NTP time server is configured, you must open UDP port 123 . Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. 
If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 1.6. Additional resources For information on Butane, see Creating machine configs with Butane . For information on FIPS support, see Support for FIPS cryptography .
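After the chrony machine config from section 1.5 has rolled out, you can spot-check time synchronization directly on a node. The following sketch reuses the oc debug pattern shown earlier in this chapter; the node name is a placeholder, and chronyc is the standard client shipped with the chrony package on RHCOS.

# Confirm the rendered chrony.conf and the active time sources on one node
oc debug node/<node_name> -- chroot /host cat /etc/chrony.conf
oc debug node/<node_name> -- chroot /host chronyc sources -v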
[ "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane", "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane", "chmod +x butane", "echo USDPATH", "butane <butane_file>", "variant: openshift version: 4.18.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-custom.bu -o ./99-worker-custom.yaml", "oc create -f 99-worker-custom.yaml", "./openshift-install create manifests --dir <installation_directory>", "cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "cd kmods-via-containers/", "sudo make install", "sudo systemctl daemon-reload", "cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "cd kvc-simple-kmod", "cat simple-kmod.conf", "KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"", "sudo make install", "sudo kmods-via-containers build simple-kmod USD(uname -r)", "sudo systemctl enable [email protected] --now", "sudo systemctl status [email protected]", "● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "dmesg | grep 'Hello world'", "[ 6420.761332] Hello world from simple_kmod.", "sudo cat /proc/simple-procfs-kmod", "simple-procfs-kmod number = 0", "sudo spkut 44", "KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "FAKEROOT=USD(mktemp -d)", "cd kmods-via-containers", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd ../kvc-simple-kmod", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree", "variant: openshift version: 4.18.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true", "butane 99-simple-kmod.bu --files-dir . 
-o 99-simple-kmod.yaml", "oc create -f 99-simple-kmod.yaml", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "variant: openshift version: 4.18.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: \"{\\\"payload\\\": \\\"...\\\", \\\"protected\\\": \\\"...\\\", \\\"signature\\\": \\\"...\\\"}\" 4 threshold: 2 5 openshift: fips: true", "sudo yum install clevis", "clevis-encrypt-tang '{\"url\":\"http://tang1.example.com:7500\"}' < /dev/null > /dev/null 1", "The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1", "curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws", "{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}", "clevis-encrypt-tang '{\"url\":\"http://tang2.example.com:7500\",\"adv\":\"adv.jws\"}' < /dev/null > /dev/null", "./openshift-install create manifests --dir <installation_directory> 1", "variant: openshift version: 4.18.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: \"{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}\" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13", "butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml", "oc debug node/compute-1", "chroot /host", "cryptsetup status root", "/dev/mapper/root is active and is in use. 
type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write", "clevis luks list -d /dev/sda4 1", "1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1", "cat /proc/mdstat", "Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>", "mdadm --detail /dev/md126", "/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8", "mount | grep /dev/md", "/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)", "variant: openshift version: 4.18.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true", "variant: openshift version: 4.18.0 metadata: name: raid1-alt-storage labels: 
machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true", "butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1", "mdadm -CR /dev/md/imsm0 -e imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1", "mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean", "mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0", "mdadm -S /dev/md/dummy mdadm -S /dev/md/coreos mdadm --kill-subarray=0 /dev/md/imsm0", "mdadm -A /dev/md/coreos /dev/md/imsm0", "mdadm --detail --export /dev/md/imsm0", "coreos-installer install /dev/md/coreos --append-karg rd.md.uuid=<md_UUID> 1", "variant: openshift version: 4.18.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installation_configuration/installing-customizing
10.5.62. Proxy
10.5.62. Proxy <Proxy *> and </Proxy> tags create a container which encloses a group of configuration directives meant to apply only to the proxy server. Many directives which are allowed within a <Directory> container may also be used within <Proxy> container.
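As a hedged illustration, the following sketch appends a <Proxy *> container that restricts use of the proxy to a single internal subnet and then restarts the service. The file path and the 192.168.1.0/24 network are assumptions, and Order, Deny, and Allow are standard Apache 2.0 access-control directives that are equally valid inside <Directory> containers.

# Limit forward-proxy use to one subnet (back up httpd.conf before editing)
cat >> /etc/httpd/conf/httpd.conf <<'EOF'
<Proxy *>
    Order deny,allow
    Deny from all
    Allow from 192.168.1.0/24
</Proxy>
EOF
service httpd restart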
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-proxy
Chapter 6. Access management for Red Hat Quay
Chapter 6. Access management for Red Hat Quay As a Red Hat Quay user, you can create your own repositories and make them accessible to other users that are part of your instance. Alternatively, you can create an organization and associate a set of repositories directly to that organization, referred to as an organization repository . Organization repositories differ from basic repositories in that the organization is intended to set up shared repositories through groups of users. In Red Hat Quay, groups of users can be either Teams , or sets of users with the same permissions, or individual users . You can also allow access to user repositories and organization repositories by creating credentials associated with Robot Accounts. Robot Accounts make it easy for a variety of container clients, such as Docker or Podman, to access your repositories without requiring that the client have a Red Hat Quay user account. 6.1. Red Hat Quay teams overview In Red Hat Quay a team is a group of users with shared permissions, allowing for efficient management and collaboration on projects. Teams can help streamline access control and project management within organizations and repositories. They can be assigned designated permissions and help ensure that members have the appropriate level of access to their repositories based on their roles and responsibilities. 6.1.1. Creating a team by using the UI When you create a team for your organization you can select the team name, choose which repositories to make available to the team, and decide the level of access to the team. Use the following procedure to create a team for your organization repository. Prerequisites You have created an organization. Procedure On the Red Hat Quay v2 UI, click the name of an organization. On your organization's page, click Teams and membership . Click the Create new team box. In the Create team popup window, provide a name for your new team. Optional. Provide a description for your new team. Click Proceed . A new popup window appears. Optional. Add this team to a repository, and set the permissions to one of the following: None . Team members have no permission to the repository. Read . Team members can view and pull from the repository. Write . Team members can read (pull) from and write (push) to the repository. Admin . Full access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. Optional. Add a team member or robot account. To add a team member, enter the name of their Red Hat Quay account. Review and finish the information, then click Review and Finish . The new team appears under the Teams and membership page . 6.1.2. Creating a team by using the API When you create a team for your organization with the API you can select the team name, choose which repositories to make available to the team, and decide the level of access to the team. Use the following procedure to create a team for your organization repository. Prerequisites You have created an organization. You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. 
Procedure Enter the following PUT /api/v1/organization/{orgname}/team/{teamname} command to create a team for your organization: USD curl -k -X PUT -H 'Accept: application/json' -H 'Content-Type: application/json' -H "Authorization: Bearer <bearer_token>" --data '{"role": "creator"}' https://<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name> Example output {"name": "example_team", "description": "", "can_view": true, "role": "creator", "avatar": {"name": "example_team", "hash": "dec209fd7312a2284b689d4db3135e2846f27e0f40fa126776a0ce17366bc989", "color": "#e7ba52", "kind": "team"}, "new_team": true} 6.1.3. Managing a team by using the UI After you have created a team, you can use the UI to manage team members, set repository permissions, delete the team, or view more general information about the team. 6.1.3.1. Adding users to a team by using the UI With administrative privileges to an Organization, you can add users and robot accounts to a team. When you add a user, Red Hat Quay sends an email to that user. The user remains pending until they accept the invitation. Use the following procedure to add users or robot accounts to a team. Procedure On the Red Hat Quay landing page, click the name of your Organization. In the navigation pane, click Teams and Membership . Select the menu kebab of the team that you want to add users or robot accounts to. Then, click Manage team members . Click Add new member . In the textbox, enter information for one of the following: A username from an account on the registry. The email address for a user account on the registry. The name of a robot account. The name must be in the form of <organization_name>+<robot_name>. Note Robot Accounts are immediately added to the team. For user accounts, an invitation to join is mailed to the user. Until the user accepts that invitation, the user remains in the INVITED TO JOIN state. After the user accepts the email invitation to join the team, they move from the INVITED TO JOIN list to the MEMBERS list for the Organization. Click Add member . 6.1.3.2. Setting a team role by using the UI After you have created a team, you can set the role of that team within the Organization. Prerequisites You have created a team. Procedure On the Red Hat Quay landing page, click the name of your Organization. In the navigation pane, click Teams and Membership . Select the TEAM ROLE drop-down menu, as shown in the following figure: For the selected team, choose one of the following roles: Admin . Full administrative access to the organization, including the ability to create teams, add members, and set permissions. Member . Inherits all permissions set for the team. Creator . All member permissions, plus the ability to create new repositories. 6.1.3.2.1. Managing team members and repository permissions Use the following procedure to manage team members and set repository permissions. On the Teams and membership page of your organization, you can also manage team members and set repository permissions. Click the kebab menu, and select one of the following options: Manage Team Members . On this page, you can view all members, team members, robot accounts, or users who have been invited. You can also add a new team member by clicking Add new member . Set repository permissions . On this page, you can set the repository permissions to one of the following: None . Team members have no permission to the repository. Read . Team members can view and pull from the repository. Write . 
Team members can read (pull) from and write (push) to the repository. Admin . Full access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. Delete . This popup window allows you to delete the team by clicking Delete . 6.1.3.2.2. Viewing additional information about a team Use the following procedure to view general information about the team. Procedure On the Teams and membership page of your organization, you can click one of the following options to reveal more information about teams, members, and collaborators: Team View . This menu shows all team names, the number of members, the number of repositories, and the role for each team. Members View . This menu shows all usernames of team members, the teams that they are part of, and the repository permissions of the user. Collaborators View . This menu shows repository collaborators. Collaborators are users who do not belong to any team in the organization, but who have direct permissions on one or more repositories belonging to the organization. 6.1.4. Managing a team by using the Red Hat Quay API After you have created a team, you can use the API to obtain information about team permissions or team members, add, update, or delete team members (including by email), or delete an organization team. The following procedures show you how to manage a team using the Red Hat Quay API. 6.1.4.1. Managing team members and repository permissions by using the API Use the following procedures to add a member to a team (by direct invite or by email), or to remove a member from a team. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the PUT /api/v1/organization/{orgname}/team/{teamname}/members/{membername} command to add or invite a member to an existing team: USD curl -X PUT \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>" Example output {"name": "testuser", "kind": "user", "is_robot": false, "avatar": {"name": "testuser", "hash": "d51d17303dc3271ac3266fb332d7df919bab882bbfc7199d2017a4daac8979f0", "color": "#5254a3", "kind": "user"}, "invited": false} Enter the DELETE /api/v1/organization/{orgname}/team/{teamname}/members/{membername} command to remove a member from a team: USD curl -X DELETE \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>" This command does not return output in the CLI. To ensure that a member has been deleted, you can enter the GET /api/v1/organization/{orgname}/team/{teamname}/members command and ensure that the member is not returned in the output.
USD curl -X GET \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members" Example output {"name": "owners", "members": [{"name": "quayadmin", "kind": "user", "is_robot": false, "avatar": {"name": "quayadmin", "hash": "b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc", "color": "#17becf", "kind": "user"}, "invited": false}, {"name": "test-org+test", "kind": "user", "is_robot": true, "avatar": {"name": "test-org+test", "hash": "aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370", "color": "#8c564b", "kind": "robot"}, "invited": false}], "can_edit": true} You can enter the PUT /api/v1/organization/{orgname}/team/{teamname}/invite/{email} command to invite a user, by email address, to an existing team: USD curl -X PUT \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>" You can enter the DELETE /api/v1/organization/{orgname}/team/{teamname}/invite/{email} command to delete the invite of an email address to join a team. For example: USD curl -X DELETE \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>" 6.1.4.2. Setting the role of a team within an organization by using the API Use the following procedure to view and set the role a team within an organization using the API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following GET /api/v1/organization/{orgname}/team/{teamname}/permissions command to return a list of repository permissions for the organization's team. Note that your team must have been added to a repository for this command to return information. USD curl -X GET \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/permissions" Example output {"permissions": [{"repository": {"name": "api-repo", "is_public": true}, "role": "admin"}]} You can create or update a team within an organization to have a specified role of admin , member , or creator using the PUT /api/v1/organization/{orgname}/team/{teamname} command. For example: USD curl -X PUT \ -H "Authorization: Bearer <your_access_token>" \ -H "Content-Type: application/json" \ -d '{ "role": "<role>" }' \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>" Example output {"name": "testteam", "description": "", "can_view": true, "role": "creator", "avatar": {"name": "testteam", "hash": "827f8c5762148d7e85402495b126e0a18b9b168170416ed04b49aae551099dc8", "color": "#ff7f0e", "kind": "team"}, "new_team": false} 6.1.4.3. Deleting a team within an organization by using the API Use the following procedure to delete a team within an organization by using the API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure You can delete a team within an organization by entering the DELETE /api/v1/organization/{orgname}/team/{teamname} command: USD curl -X DELETE \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>" This command does not return output in the CLI. 6.2. 
Creating and managing default permissions by using the UI Default permissions define permissions that should be granted automatically to a repository when it is created, in addition to the default of the repository's creator. Permissions are assigned based on the user who created the repository. Use the following procedure to create default permissions using the Red Hat Quay v2 UI. Procedure Click the name of an organization. Click Default permissions . Click Create default permissions . A toggle drawer appears. Select either Anyone or Specific user to create a default permission when a repository is created. If selecting Anyone , the following information must be provided: Applied to . Search, invite, or add a user/robot/team. Permission . Set the permission to one of Read , Write , or Admin . If selecting Specific user , the following information must be provided: Repository creator . Provide either a user or robot account. Applied to . Provide a username, robot account, or team name. Permission . Set the permission to one of Read , Write , or Admin . Click Create default permission . A confirmation box appears, returning the following alert: Successfully created default permission for creator . 6.3. Creating and managing default permissions by using the API Use the following procedures to manage default permissions using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to create a default permission with the POST /api/v1/organization/{orgname}/prototypes endpoint: USD curl -X POST -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" --data '{ "role": "<admin_read_or_write>", "delegate": { "name": "<username>", "kind": "user" }, "activating_user": { "name": "<robot_name>" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes Example output {"activating_user": {"name": "test-org+test", "is_robot": true, "kind": "user", "is_org_member": true, "avatar": {"name": "test-org+test", "hash": "aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370", "color": "#8c564b", "kind": "robot"}}, "delegate": {"name": "testuser", "is_robot": false, "kind": "user", "is_org_member": false, "avatar": {"name": "testuser", "hash": "f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a", "color": "#6b6ecf", "kind": "user"}}, "role": "admin", "id": "977dc2bc-bc75-411d-82b3-604e5b79a493"} Enter the following command to update a default permission using the PUT /api/v1/organization/{orgname}/prototypes/{prototypeid} endpoint, for example, if you want to change the permission type. You must include the ID that was returned when you created the policy. 
USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "role": "write" }' \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid> Example output {"activating_user": {"name": "test-org+test", "is_robot": true, "kind": "user", "is_org_member": true, "avatar": {"name": "test-org+test", "hash": "aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370", "color": "#8c564b", "kind": "robot"}}, "delegate": {"name": "testuser", "is_robot": false, "kind": "user", "is_org_member": false, "avatar": {"name": "testuser", "hash": "f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a", "color": "#6b6ecf", "kind": "user"}}, "role": "write", "id": "977dc2bc-bc75-411d-82b3-604e5b79a493"} You can delete the permission by entering the DELETE /api/v1/organization/{orgname}/prototypes/{prototypeid} command: curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id> This command does not return an output. Instead, you can obtain a list of all permissions by entering the GET /api/v1/organization/{orgname}/prototypes command: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes Example output {"prototypes": []} 6.4. Adjusting access settings for a repository by using the UI Use the following procedure to adjust access settings for a user or robot account for a repository using the v2 UI. Prerequisites You have created a user account or robot account. Procedure Log into Red Hat Quay. On the v2 UI, click Repositories . Click the name of a repository, for example, quayadmin/busybox . Click the Settings tab. Optional. Click User and robot permissions . You can adjust the settings for a user or robot account by clicking the dropdown menu option under Permissions . You can change the settings to Read , Write , or Admin . Read . The User or Robot Account can view and pull from the repository. Write . The User or Robot Account can read (pull) from and write (push) to the repository. Admin . The User or Robot account has access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. 6.5. Adjusting access settings for a repository by using the API Use the following procedure to adjust access settings for a user or robot account for a repository by using the API. Prerequisites You have created a user account or robot account. You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. 
Procedure Enter the following PUT /api/v1/repository/{repository}/permissions/user/{username} command to change the permissions of a user: USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{"role": "admin"}' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username> Example output {"role": "admin", "name": "quayadmin+test", "is_robot": true, "avatar": {"name": "quayadmin+test", "hash": "ca9afae0a9d3ca322fc8a7a866e8476dd6c98de543decd186ae090e420a88feb", "color": "#8c564b", "kind": "robot"}} To delete the current permission, you can enter the DELETE /api/v1/repository/{repository}/permissions/user/{username} command: USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username> This command does not return any output in the CLI. Instead, you can check that the permissions were deleted by entering the GET /api/v1/repository/{repository}/permissions/user/ command: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/ Example output {"message":"User does not have permission for repo."}
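As an illustration only, the three calls above can be combined into a short shell sketch. The server name, repository, account, and token below are placeholders that you must replace with your own values:

QUAY="quay-server.example.com"
REPO="namespace/repository"
ACCOUNT="quayadmin+test"
TOKEN="<bearer_token>"
# Grant admin on the repository, then confirm that the permission is set
curl -s -X PUT -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d '{"role": "admin"}' "https://$QUAY/api/v1/repository/$REPO/permissions/user/$ACCOUNT"
curl -s -X GET -H "Authorization: Bearer $TOKEN" "https://$QUAY/api/v1/repository/$REPO/permissions/user/$ACCOUNT/"
# Remove the permission again
curl -s -X DELETE -H "Authorization: Bearer $TOKEN" "https://$QUAY/api/v1/repository/$REPO/permissions/user/$ACCOUNT"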
[ "curl -k -X PUT -H 'Accept: application/json' -H 'Content-Type: application/json' -H \"Authorization: Bearer <bearer_token>\" --data '{\"role\": \"creator\"}' https://<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>", "{\"name\": \"example_team\", \"description\": \"\", \"can_view\": true, \"role\": \"creator\", \"avatar\": {\"name\": \"example_team\", \"hash\": \"dec209fd7312a2284b689d4db3135e2846f27e0f40fa126776a0ce17366bc989\", \"color\": \"#e7ba52\", \"kind\": \"team\"}, \"new_team\": true}", "curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"", "{\"name\": \"testuser\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"d51d17303dc3271ac3266fb332d7df919bab882bbfc7199d2017a4daac8979f0\", \"color\": \"#5254a3\", \"kind\": \"user\"}, \"invited\": false}", "curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"", "curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members\"", "{\"name\": \"owners\", \"members\": [{\"name\": \"quayadmin\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}, \"invited\": false}, {\"name\": \"test-org+test\", \"kind\": \"user\", \"is_robot\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}, \"invited\": false}], \"can_edit\": true}", "curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"", "curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"", "curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/permissions\"", "{\"permissions\": [{\"repository\": {\"name\": \"api-repo\", \"is_public\": true}, \"role\": \"admin\"}]}", "curl -X PUT -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\" -d '{ \"role\": \"<role>\" }' \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"", "{\"name\": \"testteam\", \"description\": \"\", \"can_view\": true, \"role\": \"creator\", \"avatar\": {\"name\": \"testteam\", \"hash\": \"827f8c5762148d7e85402495b126e0a18b9b168170416ed04b49aae551099dc8\", \"color\": \"#ff7f0e\", \"kind\": \"team\"}, \"new_team\": false}", "curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"<admin_read_or_write>\", \"delegate\": { \"name\": \"<username>\", \"kind\": \"user\" }, \"activating_user\": { \"name\": \"<robot_name>\" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes", "{\"activating_user\": {\"name\": \"test-org+test\", \"is_robot\": true, 
\"kind\": \"user\", \"is_org_member\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}, \"delegate\": {\"name\": \"testuser\", \"is_robot\": false, \"kind\": \"user\", \"is_org_member\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}}, \"role\": \"admin\", \"id\": \"977dc2bc-bc75-411d-82b3-604e5b79a493\"}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"write\" }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid>", "{\"activating_user\": {\"name\": \"test-org+test\", \"is_robot\": true, \"kind\": \"user\", \"is_org_member\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}, \"delegate\": {\"name\": \"testuser\", \"is_robot\": false, \"kind\": \"user\", \"is_org_member\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}}, \"role\": \"write\", \"id\": \"977dc2bc-bc75-411d-82b3-604e5b79a493\"}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes", "{\"prototypes\": []}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{\"role\": \"admin\"}' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>", "{\"role\": \"admin\", \"name\": \"quayadmin+test\", \"is_robot\": true, \"avatar\": {\"name\": \"quayadmin+test\", \"hash\": \"ca9afae0a9d3ca322fc8a7a866e8476dd6c98de543decd186ae090e420a88feb\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/", "{\"message\":\"User does not have permission for repo.\"}" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/use_red_hat_quay/use-quay-manage-repo
function::task_ns_uid
function::task_ns_uid Name function::task_ns_uid - The user identifier of the task Synopsis Arguments task task_struct pointer Description This function returns the user id of the given task.
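For example, a minimal command-line sketch of how this function might be used, assuming SystemTap is installed and you have the privileges required to run probes; the probe point and output format are illustrative only:

# Print the user ID reported by task_ns_uid for every process that calls openat
stap -e 'probe syscall.openat { printf("%s uid=%d file=%s\n", execname(), task_ns_uid(task_current()), filename) }'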
[ "task_ns_uid:long(task:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-ns-uid
Chapter 4. Container images with LLVM Toolset on RHEL 8
Chapter 4. Container images with LLVM Toolset on RHEL 8 On RHEL 8, you can build your own LLVM Toolset container images on top of Red Hat Universal Base Images (UBI) containers using Containerfiles. 4.1. Creating a container image of LLVM Toolset on RHEL 8 On RHEL 8, LLVM Toolset packages are part of the Red Hat Universal Base Images (UBIs) repositories. To keep the container image size small, install only individual packages instead of the entire LLVM Toolset. Prerequisites An existing Containerfile. For information on creating Containerfiles, see the Dockerfile reference page. Procedure Visit the Red Hat Container Catalog . Select a UBI. Click Get this image and follow the instructions. To create a container image containing LLVM Toolset, add the following lines to your Containerfile: To create a container image containing an individual package only, add the following lines to your Containerfile: Replace < package-name > with the name of the package you want to install. 4.2. Additional resources For more information on Red Hat UBI images, see Working with Container Images . For more information on Red Hat UBI repositories, see Universal Base Images (UBI): Images, repositories, packages, and source code .
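For example, assuming the Containerfile above is saved in your current working directory, you might build and verify the image with Podman; the image tag my-llvm-toolset is arbitrary:

# Build the container image from the Containerfile in the current directory
podman build -t my-llvm-toolset .
# Check that the compiler installed by the llvm-toolset module is available
podman run --rm my-llvm-toolset clang --version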
[ "FROM registry.access.redhat.com/ubi8/ubi: latest RUN yum module install -y llvm-toolset", "RUN yum install -y < package-name >" ]
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_llvm_16.0.6_toolset/assembly_container-images-with-comp-toolset_using-llvm-toolset
Chapter 27. Working with Webhooks
Chapter 27. Working with Webhooks A Webhook enables you to execute specified commands between applications over the web. Automation controller currently provides webhook integration with GitHub and GitLab. Set up a webhook using the following services: Setting up a GitHub webhook Setting up a GitLab webhook Viewing the payload output The webhook post-status-back functionality for GitHub and GitLab is designed to work only under certain CI events. Receiving another kind of event results in messages such as the following in the service log: awx.main.models.mixins Webhook event did not have a status API endpoint associated, skipping. 27.1. Setting up a GitHub webhook Automation controller has the ability to run jobs based on a triggered webhook event coming in. Job status information (pending, error, success) can be sent back only for pull request events. If you do not need automation controller to post job statuses back to the webhook service, go directly to step 3. Procedure Generate a Personal Access Token (PAT) for use with automation controller: In the profile settings of your GitHub account, select Settings . From the navigation panel, select <> Developer Settings . On the Developer Settings page, select Personal access tokens . Select Tokens(classic) From the Personal access tokens screen, click Generate a personal access token . When prompted, enter your GitHub account password to continue. In the Note field, enter a brief description about what this PAT is used for. In the Select scopes fields, check the boxes to repo:status , repo_deployment , and public_repo . The automation webhook only needs repository scope access, with the exception of invites. For more information, see Scopes for OAuth apps documentation . Click Generate token . Important When the token is generated, ensure that you copy the PAT, as you need it in step 2. You cannot access this token again in GitHub. Use the PAT to optionally create a GitHub credential: Go to your instance and create a new credential for the GitHub PAT, using the generated token. Make note of the name of this credential, as you use it in the job template that posts back to GitHub. Go to the job template with which you want to enable webhooks, and select the webhook service and credential you created in the preceding step. Click Save . Your job template is set up to post back to GitHub. Go to a GitHub repository where you want to configure webhooks and select Settings . From the navigation panel, select Webhooks Add webhook . To complete the Add webhook page, you must check the Enable Webhook option in a job template or workflow job template. For more information, see step 3 in both Creating a job template and Creating a workflow job template . Complete the following fields: Payload URL : Copy the contents of the Webhook URL from the job template and paste it here. The results are sent to this address from GitHub. Content type : Set it to application/json . Secret : Copy the contents of the Webhook Key from the job template and paste it here. Which events would you like to trigger this webhook? : Select the types of events you want to trigger a webhook. Any such event will trigger the job or workflow. To have the job status (pending, error, success) sent back to GitHub, you must select Pull requests in the Let me select individual events section. Active : Leave this checked. Click Add webhook . When your webhook is configured, it is displayed in the list of webhooks active for your repository, along with the ability to edit or delete it. 
Click a webhook, to go to the Manage webhook screen. Scroll to view the delivery attempts made to your webhook and whether they succeeded or failed. Additional resources For more information, see the Webhooks documentation . 27.2. Setting up a GitLab webhook Automation controller has the ability to run jobs based on a triggered webhook event coming in. Job status information (pending, error, success) can be sent back only for pull request events. If automation controller is not required to post job statuses back to the webhook service, go directly to step 3. Procedure Generate a Personal Access Token (PAT) for use with automation controller: From the navigation panel in GitLab, select your avatar and Edit profile . From the navigation panel, select Access tokens . Complete the following fields: Token name : Enter a brief description about what this PAT is used for. Expiration date : Skip this field unless you want to set an expiration date for your webhook. Select scopes : Select those that are applicable to your integration. For automation controller, api is the only selection necessary. Click Create personal access token . Important When the token is generated, ensure that you copy the PAT, as you need it in step 2. You cannot access this token again in GitLab. Use the PAT to optionally create a GitLab credential: Go to your instance, and create a new credential for the GitLab PAT, using the generated token. Make note of the name of this credential, as you use it in the job template that posts back to GitLab. Go to the job template with which you want to enable webhooks, and select the webhook service and credential you created in the preceding step. Click Save . Your job template is set up to post back to GitLab. Go to a GitLab repository where you want to configure webhooks. From the navigation panel, select Settings Integrations . To complete the Add webhook page, you must check the Enable Webhook option in a job template or workflow job template. For more information, see step 3 in both Creating a job template and Creating a workflow job template . Complete the following fields: URL : Copy the contents of the Webhook URL from the job template and paste it here. The results are sent to this address from GitLab. Secret Token : Copy the contents of the Webhook Key from the job template and paste it here. Trigger : Select the types of events you want to trigger a webhook. Any such event will trigger the job or workflow. To have job status (pending, error, success) sent back to GitLab, you must select Merge request events in the Trigger section. SSL verification : Leave Enable SSL verification selected. Click Add webhook . When your webhook is configured, it is displayed in the list Project Webhooks for your repository, along with the ability to test events, edit or delete the webhook. Testing a webhook event displays the results on each page whether it succeeded or failed. Additional resources For more information, see Webhooks . 27.3. Viewing the payload output You can view the entire payload exposed as an extra variable. Procedure From the navigation panel, select Automation Execution Jobs . Select the job template with the webhook enabled. Select the Details tab. In the Extra Variables field, view the payload output from the awx_webhook_payload variable, as shown in the following example:
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/controller-work-with-webhooks
Managing Transactions on JBoss EAP
Managing Transactions on JBoss EAP Red Hat JBoss Enterprise Application Platform 7.4 Instructions and information for administrators to troubleshoot Red Hat JBoss Enterprise Application Platform transactions. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/managing_transactions_on_jboss_eap/index
Chapter 7. RangeAllocation [security.openshift.io/v1]
Chapter 7. RangeAllocation [security.openshift.io/v1] Description RangeAllocation is used so we can easily expose a RangeAllocation typed for security group Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required range data 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data string data is a byte array representing the serialized state of a range allocation. It is a bitmap with each bit set to one to represent a range is taken. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata range string range is a string representing a unique label for a range of uids, "1000000000-2000000000/10000". 7.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/rangeallocations DELETE : delete collection of RangeAllocation GET : list or watch objects of kind RangeAllocation POST : create a RangeAllocation /apis/security.openshift.io/v1/watch/rangeallocations GET : watch individual changes to a list of RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead. /apis/security.openshift.io/v1/rangeallocations/{name} DELETE : delete a RangeAllocation GET : read the specified RangeAllocation PATCH : partially update the specified RangeAllocation PUT : replace the specified RangeAllocation /apis/security.openshift.io/v1/watch/rangeallocations/{name} GET : watch changes to an object of kind RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 7.2.1. /apis/security.openshift.io/v1/rangeallocations Table 7.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of RangeAllocation Table 7.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 7.3. Body parameters Parameter Type Description body DeleteOptions schema Table 7.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind RangeAllocation Table 7.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.6. HTTP responses HTTP code Reponse body 200 - OK RangeAllocationList schema 401 - Unauthorized Empty HTTP method POST Description create a RangeAllocation Table 7.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.8. Body parameters Parameter Type Description body RangeAllocation schema Table 7.9. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 202 - Accepted RangeAllocation schema 401 - Unauthorized Empty 7.2.2. /apis/security.openshift.io/v1/watch/rangeallocations Table 7.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/security.openshift.io/v1/rangeallocations/{name} Table 7.12. Global path parameters Parameter Type Description name string name of the RangeAllocation Table 7.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a RangeAllocation Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. 
Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.15. Body parameters Parameter Type Description body DeleteOptions schema Table 7.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RangeAllocation Table 7.17. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RangeAllocation Table 7.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.19. Body parameters Parameter Type Description body Patch schema Table 7.20. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RangeAllocation Table 7.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.22. Body parameters Parameter Type Description body RangeAllocation schema Table 7.23. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 401 - Unauthorized Empty 7.2.4. /apis/security.openshift.io/v1/watch/rangeallocations/{name} Table 7.24. Global path parameters Parameter Type Description name string name of the RangeAllocation Table 7.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion. HTTP method GET Description watch changes to an object of kind RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
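For example, the list endpoint documented above can be exercised with the oc client or with a direct REST call; the token retrieval shown here is only one possible approach and assumes that you are already logged in to the cluster:

# List RangeAllocation objects with the CLI
oc get rangeallocations
# Equivalent raw request against the documented endpoint
TOKEN=$(oc whoami -t)
API=$(oc whoami --show-server)
curl -sk -H "Authorization: Bearer $TOKEN" "$API/apis/security.openshift.io/v1/rangeallocations"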
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_apis/rangeallocation-security-openshift-io-v1
Part III. Troubleshooting
Part III. Troubleshooting
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/troubleshooting
4.2. Set the VDB Version
4.2. Set the VDB Version You can set the version in one of two ways: through the vdb.xml file (which is useful for dynamic VDBs), or by specifying a naming convention in the deployment file (such as VDBNAME.VERSION.vdb ). The deployer is responsible for choosing an appropriate version number. If there is already a VDB name and version combination that matches the current deployment, connections to the existing VDB are terminated and its cache entries are flushed. Any new connections are then made to the new VDB.
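As a hedged example of the naming-convention approach, the VDB name and version below are placeholders, and the snippet assumes that EAP_HOME points to your server installation:

# Encode version 2 in the deployment file name (VDBNAME.VERSION.vdb)
cp Portfolio.vdb Portfolio.2.vdb
# Deploy the renamed file with the management CLI
$EAP_HOME/bin/jboss-cli.sh --connect --command="deploy Portfolio.2.vdb"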
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/set_the_vdb_version1
14.3. Comparing Images
14.3. Comparing Images Compare the contents of two specified image files ( imgname1 and imgname2 ) with the qemu-img compare command. Optionally, specify the files' format types ( fmt ). The images can have different formats and settings. By default, images with different sizes are considered identical if the larger image contains only unallocated or zeroed sectors in the area after the end of the other image. In addition, if any sector is not allocated in one image and contains only zero bytes in the other one, it is evaluated as equal. If you specify the -s option, the images are not considered identical if the image sizes differ or a sector is allocated in one image and is not allocated in the second one. The qemu-img compare command exits with one of the following exit codes: 0 - The images are identical 1 - The images are different 2 - There was an error opening one of the images 3 - There was an error checking a sector allocation 4 - There was an error reading the data
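For example, a strict comparison of two qcow2 images could look like the following; the file names are placeholders:

qemu-img compare -f qcow2 -F qcow2 -s base.qcow2 copy.qcow2
echo "exit status: $?"   # 0 identical, 1 different, 2-4 error, as listed above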
[ "qemu-img compare [-f fmt ] [-F fmt ] [-p] [-s] [-q] imgname1 imgname2" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-using_qemu_img-comparing_images
Network APIs
Network APIs OpenShift Container Platform 4.17 Reference guide for network APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_apis/index
4.289. sed
4.289. sed 4.289.1. RHBA-2011:1116 - sed bug fix update An updated sed package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The sed package provides a stream or batch (non-interactive) editor that takes text as input, performs an operation or a set of operations on the text, and outputs the modified text. Bug Fixes BZ# 721349 Prior to this update, the is_selinux_disabled() function was not correctly checked. With this update, this check returns the correct value and works as expected. BZ# 679921 Prior to this update, the behavior of the -i/--in-place option for symlinks and hardlinks was not clearly documented. With this update, the manpage and the user documentation have been improved and this problem is resolved. All sed users are advised to upgrade to this updated package, which fixes these bugs.
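To illustrate the -i/--in-place behavior for symbolic links referenced in BZ# 679921, the following is a hedged sketch using GNU sed; the file names are placeholders:
echo "hello" > target.txt
ln -s target.txt link.txt
# With --follow-symlinks, the symlink is preserved and target.txt is edited in place.
sed -i --follow-symlinks 's/hello/world/' link.txt
# Default behavior: the symlink itself is replaced by a regular file containing the edit.
sed -i 's/world/hello/' link.txt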
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/sed
Chapter 22. Load balancing on RHOSP
Chapter 22. Load balancing on RHOSP 22.1. Using the Octavia OVN load balancer provider driver with Kuryr SDN If your OpenShift Container Platform cluster uses Kuryr and was installed on a Red Hat OpenStack Platform (RHOSP) 13 cloud that was later upgraded to RHOSP 16, you can configure it to use the Octavia OVN provider driver. Important Kuryr replaces existing load balancers after you change provider drivers. This process results in some downtime. Prerequisites Install the RHOSP CLI, openstack . Install the OpenShift Container Platform CLI, oc . Verify that the Octavia OVN driver on RHOSP is enabled. Tip To view a list of available Octavia drivers, on a command line, enter openstack loadbalancer provider list . The ovn driver is displayed in the command's output. Procedure To change from the Octavia Amphora provider driver to Octavia OVN: Open the kuryr-config ConfigMap. On a command line, enter: USD oc -n openshift-kuryr edit cm kuryr-config In the ConfigMap, delete the line that contains kuryr-octavia-provider: default . For example: ... kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: default 1 ... 1 Delete this line. The cluster will regenerate it with ovn as the value. Wait for the Cluster Network Operator to detect the modification and to redeploy the kuryr-controller and kuryr-cni pods. This process might take several minutes. Verify that the kuryr-config ConfigMap annotation is present with ovn as its value. On a command line, enter: USD oc -n openshift-kuryr edit cm kuryr-config The ovn provider value is displayed in the output: ... kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: ovn ... Verify that RHOSP recreated its load balancers. On a command line, enter: USD openstack loadbalancer list | grep amphora A single Amphora load balancer is displayed. For example: a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora Search for ovn load balancers by entering: USD openstack loadbalancer list | grep ovn The remaining load balancers of the ovn type are displayed. For example: 2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn 0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn 22.2. Scaling clusters for application traffic by using Octavia OpenShift Container Platform clusters that run on Red Hat OpenStack Platform (RHOSP) can use the Octavia load balancing service to distribute traffic across multiple virtual machines (VMs) or floating IP addresses. This feature mitigates the bottleneck that single machines or addresses create. If your cluster uses Kuryr, the Cluster Network Operator created an internal Octavia load balancer at deployment. You can use this load balancer for application network scaling. If your cluster does not use Kuryr, you must create your own Octavia load balancer to use it for application network scaling. 22.2.1. Scaling clusters by using Octavia If you want to use multiple API load balancers, or if your cluster does not use Kuryr, create an Octavia load balancer and then configure your cluster to use it. Prerequisites Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment. 
Procedure From a command line, create an Octavia load balancer that uses the Amphora driver: USD openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet> You can use a name of your choice instead of API_OCP_CLUSTER . After the load balancer becomes active, create listeners: USD openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS --protocol-port 6443 API_OCP_CLUSTER Note To view the status of the load balancer, enter openstack loadbalancer list . Create a pool that uses the round-robin algorithm and has session persistence enabled: USD openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS To ensure that control plane machines are available, create a health monitor: USD openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443 Add the control plane machines as members of the load balancer pool: USD for SERVER in MASTER-0-IP MASTER-1-IP MASTER-2-IP do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done Optional: To reuse the cluster API floating IP address, unset it: USD openstack floating ip unset USDAPI_FIP Add either the unset API_FIP or a new address to the created load balancer VIP: USD openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP Your cluster now uses Octavia for load balancing. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 22.2.2. Scaling clusters that use Kuryr by using Octavia If your cluster uses Kuryr, associate the API floating IP address of your cluster with the pre-existing Octavia load balancer. Prerequisites Your OpenShift Container Platform cluster uses Kuryr. Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment. Procedure Optional: From a command line, to reuse the cluster API floating IP address, unset it: USD openstack floating ip unset USDAPI_FIP Add either the unset API_FIP or a new address to the created load balancer VIP: USD openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value USD{OCP_CLUSTER}-kuryr-api-loadbalancer) USDAPI_FIP Your cluster now uses Octavia for load balancing. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 22.3. Scaling for ingress traffic by using RHOSP Octavia You can use Octavia load balancers to scale Ingress controllers on clusters that use Kuryr. Prerequisites Your OpenShift Container Platform cluster uses Kuryr. Octavia is available on your RHOSP deployment. Procedure To copy the current internal router service, on a command line, enter: USD oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml In the file external_router.yaml , change the value of metadata.name to a descriptive name, such as router-external-default , and change the value of spec.type to LoadBalancer .
Example router file apiVersion: v1 kind: Service metadata: labels: ingresscontroller.operator.openshift.io/owning-ingresscontroller: default name: router-external-default 1 namespace: openshift-ingress spec: ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https - name: metrics port: 1936 protocol: TCP targetPort: 1936 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default sessionAffinity: None type: LoadBalancer 2 1 Ensure that this value is descriptive, like router-external-default . 2 Ensure that this value is LoadBalancer . Note You can delete timestamps and other information that is irrelevant to load balancing. From a command line, create a service from the external_router.yaml file: USD oc apply -f external_router.yaml Verify that the external IP address of the service is the same as the one that is associated with the load balancer: On a command line, retrieve the external IP address of the service: USD oc -n openshift-ingress get svc Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h Retrieve the IP address of the load balancer: USD openstack loadbalancer list | grep router-external Example output | 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia | Verify that the addresses you retrieved in the steps are associated with each other in the floating IP list: USD openstack floating ip list | grep 172.30.235.33 Example output | e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c | You can now use the value of EXTERNAL-IP as the new Ingress address. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 22.4. Configuring an external load balancer You can configure an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) to use an external load balancer in place of the default load balancer. Prerequisites On your load balancer, TCP over ports 6443, 443, and 80 must be available to any users of your system. Load balance the API port, 6443, between each of the control plane nodes. Load balance the application ports, 443 and 80, between all of the compute nodes. On your load balancer, port 22623, which is used to serve ignition startup configurations to nodes, is not exposed outside of the cluster. Your load balancer must be able to access every machine in your cluster. Methods to allow this access include: Attaching the load balancer to the cluster's machine subnet. Attaching floating IP addresses to machines that use the load balancer. Important External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes. Procedure Enable access to the cluster from your load balancer on ports 6443, 443, and 80. As an example, note this HAProxy configuration: A section of a sample HAProxy configuration ... 
listen my-cluster-api-6443 bind 0.0.0.0:6443 mode tcp balance roundrobin server my-cluster-master-2 192.0.2.2:6443 check server my-cluster-master-0 192.0.2.3:6443 check server my-cluster-master-1 192.0.2.1:6443 check listen my-cluster-apps-443 bind 0.0.0.0:443 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.6:443 check server my-cluster-worker-1 192.0.2.5:443 check server my-cluster-worker-2 192.0.2.4:443 check listen my-cluster-apps-80 bind 0.0.0.0:80 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.7:80 check server my-cluster-worker-1 192.0.2.9:80 check server my-cluster-worker-2 192.0.2.8:80 check Add records to your DNS server for the cluster API and apps over the load balancer. For example: <load_balancer_ip_address> api.<cluster_name>.<base_domain> <load_balancer_ip_address> apps.<cluster_name>.<base_domain> From a command line, use curl to verify that the external load balancer and DNS configuration are operational. Verify that the cluster API is accessible: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that cluster applications are accessible: Note You can also verify application accessibility by opening the OpenShift Container Platform console in a web browser. USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, you receive an HTTP response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private
[ "oc -n openshift-kuryr edit cm kuryr-config", "kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: default 1", "oc -n openshift-kuryr edit cm kuryr-config", "kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: ovn", "openstack loadbalancer list | grep amphora", "a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora", "openstack loadbalancer list | grep ovn", "2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn 0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn", "openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>", "openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS--protocol-port 6443 API_OCP_CLUSTER", "openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS", "openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443", "for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done", "openstack floating ip unset USDAPI_FIP", "openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP", "openstack floating ip unset USDAPI_FIP", "openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value USD{OCP_CLUSTER}-kuryr-api-loadbalancer) USDAPI_FIP", "oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml", "apiVersion: v1 kind: Service metadata: labels: ingresscontroller.operator.openshift.io/owning-ingresscontroller: default name: router-external-default 1 namespace: openshift-ingress spec: ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https - name: metrics port: 1936 protocol: TCP targetPort: 1936 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default sessionAffinity: None type: LoadBalancer 2", "oc apply -f external_router.yaml", "oc -n openshift-ingress get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h", "openstack loadbalancer list | grep router-external", "| 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |", "openstack floating ip list | grep 172.30.235.33", "| e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |", "listen my-cluster-api-6443 bind 0.0.0.0:6443 mode tcp balance roundrobin server my-cluster-master-2 192.0.2.2:6443 check server my-cluster-master-0 192.0.2.3:6443 check server my-cluster-master-1 
192.0.2.1:6443 check listen my-cluster-apps-443 bind 0.0.0.0:443 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.6:443 check server my-cluster-worker-1 192.0.2.5:443 check server my-cluster-worker-2 192.0.2.4:443 check listen my-cluster-apps-80 bind 0.0.0.0:80 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.7:80 check server my-cluster-worker-1 192.0.2.9:80 check server my-cluster-worker-2 192.0.2.8:80 check", "<load_balancer_ip_address> api.<cluster_name>.<base_domain> <load_balancer_ip_address> apps.<cluster_name>.<base_domain>", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/networking/load-balancing-openstack
13.2. Enabling the Ctrl+Alt+Backspace Shortcut
13.2. Enabling the Ctrl+Alt+Backspace Shortcut The Ctrl + Alt + Backspace shortcut key combination is used for terminating the X server. You might want to terminate the X server especially when: a program caused the X server to stop working. you need to switch from your logged-in session quickly. you have launched a program that failed. you cannot operate in the current session for various reasons. your screen freezes. To enable the Ctrl + Alt + Backspace shortcut to forcibly terminate the X server by default for all users, you need to set the org.gnome.desktop.input-sources.xkb-options GSettings key. (For more information on GSettings keys, see Section 9.6, "GSettings Keys Properties" .) Procedure 13.2. Enabling the Ctrl-Alt-Backspace Shortcut Create a local database for machine-wide settings in /etc/dconf/db/local.d/00-input-sources : Override the user's setting and prevent the user from changing it in /etc/dconf/db/local.d/locks/input-sources : Update the system databases for the changes to take effect: Users must log out and back in again before the system-wide settings take effect. The Ctrl + Alt + Backspace key combination is now enabled. All users can terminate the X server quickly and easily, and doing so brings them back to the login prompt.
[ "Enable Ctrl-Alt-Backspace for all users xkb-options=['terminate:ctrl_alt_bksp']", "Lock the list of enabled XKB options /org/gnome/desktop/input-sources/xkb-options", "dconf update" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/enable-ctrl-alt-backspace
2.8.5.3. DMZs and IPTables
2.8.5.3. DMZs and IPTables You can create iptables rules to route traffic to certain machines, such as a dedicated HTTP or FTP server, in a demilitarized zone ( DMZ ). A DMZ is a special local subnetwork dedicated to providing services on a public carrier, such as the Internet. For example, to set a rule for routing incoming HTTP requests to a dedicated HTTP server at 10.0.4.2 (outside of the 192.168.1.0/24 range of the LAN), NAT uses the PREROUTING chain of the nat table to forward the packets to the appropriate destination: With this command, all HTTP connections to port 80 from outside of the LAN are routed to the HTTP server on a network separate from the rest of the internal network. This form of network segmentation can prove safer than allowing HTTP connections to a machine on the network. If the HTTP server is configured to accept secure connections, then port 443 must be forwarded as well, as shown in the sketch after the command listing.
[ "~]# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.4.2:80" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-forward_and_nat_rules-dmzs_and_iptables
Chapter 1. Performing storage operations in Red Hat OpenStack Services on OpenShift
Chapter 1. Performing storage operations in Red Hat OpenStack Services on OpenShift Red Hat OpenStack Services on OpenShift (RHOSO) provides the following storage services: Block Storage service (cinder) Image service (glance) Object Storage service (swift) Shared File Systems service (manila) You can manage cloud storage by using either the RHOSO Dashboard (horizon) or the OpenStack command-line interface (CLI). You can perform most procedures by using either method, but you can only complete some of the more advanced procedures by using the OpenStack CLI. 1.1. Block storage (cinder) The Block Storage service (cinder) allows users to provision block storage volumes on back ends. Users can attach volumes to instances to augment their ephemeral storage with general-purpose persistent storage. You can detach and re-attach volumes to instances, but you can only access these volumes through the attached instance. You can also configure instances so that they do not use ephemeral storage. Instead of using ephemeral storage, you can configure the Block Storage service to write images to a volume. You can then use the volume as a bootable root volume for an instance. Volumes also provide inherent redundancy and disaster recovery through backups and snapshots. However, backups are only provided if you deploy the optional Block Storage backup service. In addition, you can encrypt volumes for added security. 1.2. Images (glance) The Image service (glance) provides discovery, registration, and delivery services for instance images. It also provides the ability to store snapshots of instances' ephemeral disks for cloning or restore purposes. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services. 1.3. Object Storage (swift) The Object Storage service (swift) provides a fully-distributed storage solution that you can use to store any kind of static data or binary object, such as media files, large datasets, and disk images. The Object Storage service organizes objects by using object containers, which are similar to directories in a file system, but they cannot be nested. You can use the Object Storage service as a repository for nearly every service in the cloud. Red Hat Ceph Storage RGW can be used as an alternative to the Object Storage service. 1.4. Shared File Systems (manila) The Shared File Systems service (manila) provides the means to provision remote, shareable file systems. These are known as shares. Shares allow projects in the cloud to share POSIX-compliant storage, and multiple instances can consume a share simultaneously with read/write access mode. 1.5. Customizing and managing Red Hat Ceph Storage Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 supports Red Hat Ceph Storage 7. For information on the customization and management of Red Hat Ceph Storage 7, refer to the Red Hat Ceph Storage documentation . The following guides contain key information and procedures for these tasks: Administration Guide Configuration Guide Operations Guide Data Security and Hardening Guide Dashboard Guide Troubleshooting Guide
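As a brief illustration of the CLI method mentioned above, the following sketch creates a Block Storage volume and attaches it to an instance; the cloud name, volume size, and resource names are placeholders and are not taken from this guide:
openstack --os-cloud <cloud_name> volume create --size 10 my_data_volume
openstack --os-cloud <cloud_name> server add volume my_instance my_data_volume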
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/performing_storage_operations/assembly_introduction-to-storage-operations_osp
5.7. Multipath Command Output
5.7. Multipath Command Output When you create, modify, or list a multipath device, you get a display of the current device setup. The format is as follows. For each multipath device: For each path group: For each path: For example, the output of a multipath command might appear as follows: If the path is up and ready for I/O, the status of the path is ready or ghost . If the path is down, the status is faulty or shaky . The path status is updated periodically by the multipathd daemon based on the polling interval defined in the /etc/multipath.conf file. The dm status is similar to the path status, but from the kernel's point of view. The dm status has two states: failed , which is analogous to faulty , and active which covers all other path states. Occasionally, the path state and the dm state of a device will temporarily not agree. The possible values for online_status are running and offline . A status of offline means that this SCSI device has been disabled. Note When a multipath device is being created or modified, the path group status, the dm device name, the write permissions, and the dm status are not known. Also, the features are not always correct.
[ "action_if_any: alias (wwid_if_different_from_alias) dm_device_name_if_known vendor,product size=size features='features' hwhandler='hardware_handler' wp=write_permission_if_known", "-+- policy='scheduling_policy' prio=prio_if_known status=path_group_status_if_known", "`- host:channel:id:lun devnode major:minor dm_status_if_known path_status online_status", "3600d0230000000000e13955cc3757800 dm-1 WINSYS,SF2372 size=269G features='0' hwhandler='0' wp=rw |-+- policy='round-robin 0' prio=1 status=active | `- 6:0:0:0 sdb 8:16 active ready running `-+- policy='round-robin 0' prio=1 status=enabled `- 7:0:0:0 sdf 8:80 active ready running" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/mpio_output
Chapter 3. Manually scaling a compute machine set
Chapter 3. Manually scaling a compute machine set You can add or remove an instance of a machine in a compute machine set. Note If you need to modify aspects of a compute machine set outside of scaling, see Modifying a compute machine set . 3.1. Prerequisites If you enabled the cluster-wide proxy and scale up compute machines not included in networking.machineNetwork[].cidr from the installation configuration, you must add the compute machines to the Proxy object's noProxy field to prevent connection issues. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 3.2. Scaling a compute machine set manually To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api The compute machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the compute machines that are in the cluster by running the following command: USD oc get machines.machine.openshift.io -n openshift-machine-api Set the annotation on the compute machine that you want to delete by running the following command: USD oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api Or: USD oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine by running the following command: USD oc get machines.machine.openshift.io 3.3. 
The compute machine set deletion policy Random , Newest , and Oldest are the three supported deletion options. The default is Random , meaning that random machines are chosen and deleted when scaling compute machine sets down. The deletion policy can be set according to the use case by modifying the particular compute machine set: spec: deletePolicy: <delete_policy> replicas: <desired_replica_count> Specific machines can also be prioritized for deletion by adding the annotation machine.openshift.io/delete-machine=true to the machine of interest, regardless of the deletion policy. Important By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker compute machine set to 0 unless you first relocate the router pods. Note Custom compute machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker compute machine sets are scaling down. This prevents service disruption. 3.4. Additional resources Lifecycle hooks for the machine deletion phase
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api", "oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines.machine.openshift.io", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_management/manually-scaling-machineset
Chapter 3. Understanding Ansible concepts
Chapter 3. Understanding Ansible concepts As an automation developer, review the following Ansible concepts to create successful Ansible playbooks and automation execution environments before beginning your Ansible development project. 3.1. Prerequisites Ansible is installed. For information about installing Ansible, see Installing Ansible in the Ansible documentation. 3.2. About Ansible Playbooks Playbooks are files written in YAML that contain specific sets of human-readable instructions, or "plays", that you send to run on a single target or groups of targets. Playbooks can be used to manage configurations of and deployments to remote machines, as well as sequence multi-tier rollouts involving rolling updates. Use playbooks to delegate actions to other hosts, interacting with monitoring servers and load balancers along the way. Once written, playbooks can be used repeatedly across your enterprise for automation. A minimal example playbook is shown at the end of this chapter. 3.3. About Ansible Roles A role is Ansible's way of bundling automation content in addition to loading related vars, files, tasks, handlers, and other artifacts automatically by utilizing a known file structure. Instead of creating huge playbooks with hundreds of tasks, you can use roles to break the tasks apart into smaller, more discrete units of work. You can find roles for provisioning infrastructure, deploying applications, and all of the tasks you do every day on Ansible Galaxy. Filter your search by Type and select Role . Once you find a role that you are interested in, you can download it by using the ansible-galaxy command that comes bundled with Ansible: USD ansible-galaxy role install username.rolename 3.4. About Content Collections An Ansible Content Collection is a ready-to-use toolkit for automation. It includes several types of content, such as playbooks, roles, modules, and plugins, all in one place. The following diagram shows the basic structure of a collection: collection/ ├── docs/ ├── galaxy.yml ├── meta/ │ └── runtime.yml ├── plugins/ │ ├── modules/ │ │ └── module1.py │ ├── inventory/ │ ├── lookup/ │ ├── filter/ │ └── .../ ├── README.md ├── roles/ │ ├── role1/ │ ├── role2/ │ └── .../ ├── playbooks/ │ ├── files/ │ ├── vars/ │ ├── templates/ │ ├── playbook1.yml │ └── tasks/ └── tests/ ├── integration/ └── unit/ In Red Hat Ansible Automation Platform, automation hub serves as the source for Ansible Certified Content Collections. 3.5. About execution environments Automation execution environments are consistent and shareable container images that serve as Ansible control nodes. Automation execution environments reduce the challenge of sharing Ansible content that has external dependencies. Automation execution environments contain: Ansible Core Ansible Runner Ansible Collections Python libraries System dependencies Custom user needs You can define and create an automation execution environment using Ansible Builder. Additional resources For more information about Ansible Builder, see Creating and Consuming Execution Environments .
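The minimal example playbook referenced in section 3.2 follows. It is a hedged sketch: the webservers host group, the chrony package, and the module choices are illustrative assumptions rather than content from this guide.
---
- name: Ensure the chrony time service is installed and running
  hosts: webservers
  become: true
  tasks:
    - name: Install chrony
      ansible.builtin.dnf:
        name: chrony
        state: present
    - name: Start and enable chronyd
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: true
Running the file with ansible-playbook <playbook_file>.yml applies both tasks to every host in the webservers group.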
[ "ansible-galaxy role install username.rolename", "collection/ ├── docs/ ├── galaxy.yml ├── meta/ │ └── runtime.yml ├── plugins/ │ ├── modules/ │ │ └── module1.py │ ├── inventory/ │ ├── lookup/ │ ├── filter/ │ └── .../ ├── README.md ├── roles/ │ ├── role1/ │ ├── role2/ │ └── .../ ├── playbooks/ │ ├── files/ │ ├── vars/ │ ├── templates/ │ ├── playbook1.yml │ └── tasks/ └── tests/ ├── integration/ └── unit/" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_creator_guide/understanding_ansible_concepts
Chapter 12. Tagging virtual devices
Chapter 12. Tagging virtual devices In Red Hat OpenStack Platform (RHOSP), if you attach multiple network interfaces or block devices to an instance, you can use device tagging to communicate the intended role of each device to the instance operating system. Tags are assigned to devices at instance boot time, and are available to the instance operating system through the metadata API and the configuration drive, if enabled. You can also tag virtual devices to a running instance. For more information, see the following procedures: Attaching a network to an instance Attaching a volume to an instance Note To execute openstack client commands on the cloud, you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command, for example: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Create your instance with a virtual block device tag and a virtual network device tag: Replace <myNicTag> with the name of the tag for the virtual NIC device. You can add as many tagged virtual devices as you require. Replace <myVolumeTag> with the name of the tag for the virtual storage device. You can add as many tagged virtual devices as you require. Verify that the virtual device tags have been added to the instance metadata by using one of the following methods: Retrieve the device tag metadata from the metadata API by using GET /openstack/latest/meta_data.json . If the configuration drive is enabled and mounted under /configdrive on the instance operating system, view the /configdrive/openstack/latest/meta_data.json file. Example meta_data.json file:
[ "openstack flavor list --os-cloud <cloud_name>", "`export OS_CLOUD=<cloud_name>`", "openstack server create --flavor m1.tiny --image cirros --network <network_UUID> --nic net-id=<network_UUID>,tag=<myNicTag> --block-device id=<volume_ID>,bus=virtio,tag=<myVolumeTag> myTaggedDevicesInstance", "{ \"devices\": [ { \"type\": \"nic\", \"bus\": \"pci\", \"address\": \"0030:00:02.0\", \"mac\": \"aa:00:00:00:01\", \"tags\": [\"myNicTag\"] }, { \"type\": \"disk\", \"bus\": \"pci\", \"address\": \"0030:00:07.0\", \"serial\": \"disk-vol-227\", \"tags\": [\"myVolumeTag\"] } ] }" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/creating_and_managing_instances/tag-virt-devices_instances
23.8. Memory Allocation
23.8. Memory Allocation In cases where the guest virtual machine crashes, the optional attribute dumpCore can be used to control whether the guest virtual machine's memory should be included in the generated core dump( dumpCore='on' ) or not included ( dumpCore='off' ). Note that the default setting is on , so unless the parameter is set to off , the guest virtual machine memory will be included in the core dumpfile. The <maxMemory> element determines maximum run-time memory allocation of the guest. The slots attribute specifies the number of slots available for adding memory to the guest. The <memory> element specifies the maximum allocation of memory for the guest at boot time. This can also be set using the NUMA cell size configuration, and can be increased by hot-plugging of memory to the limit specified by maxMemory . The <currentMemory> element determines the actual memory allocation for a guest virtual machine. This value can be less than the maximum allocation (set by <memory> ) to allow for the guest virtual machine memory to balloon as needed. If omitted, this defaults to the same value as the <memory> element. The unit attribute behaves the same as for memory. <domain> <maxMemory slots='16' unit='KiB'>1524288</maxMemory> <memory unit='KiB' dumpCore='off'>524288</memory> <!-- changes the memory unit to KiB and does not allow the guest virtual machine's memory to be included in the generated core dumpfile --> <currentMemory unit='KiB'>524288</currentMemory> <!-- makes the current memory unit 524288 KiB --> ... </domain> Figure 23.10. Memory unit
[ "<domain> <maxMemory slots='16' unit='KiB'>1524288</maxMemory> <memory unit='KiB' dumpCore='off'>524288</memory> <!-- changes the memory unit to KiB and does not allow the guest virtual machine's memory to be included in the generated core dumpfile --> <currentMemory unit='KiB'>524288</currentMemory> <!-- makes the current memory unit 524288 KiB --> </domain>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-Memory_allocation
Index
Index Symbols /lib/udev/rules.d directory, udev Integration with the Device Mapper A activating logical volumes individual nodes, Activating Logical Volumes on Individual Nodes in a Cluster activating volume groups, Activating and Deactivating Volume Groups administrative procedures, LVM Administration Overview allocation, LVM Allocation policy, Creating Volume Groups preventing, Preventing Allocation on a Physical Volume archive file, Logical Volume Backup , Backing Up Volume Group Metadata B backup file, Logical Volume Backup metadata, Logical Volume Backup , Backing Up Volume Group Metadata backup file, Backing Up Volume Group Metadata block device scanning, Scanning for Block Devices C cache file building, Scanning Disks for Volume Groups to Build the Cache File cache logical volume creation, Creating LVM Cache Logical Volumes cache volumes, Cache Volumes cluster environment, LVM Logical Volumes in a Red Hat High Availability Cluster CLVM definition, LVM Logical Volumes in a Red Hat High Availability Cluster command line units, Using CLI Commands configuration examples, LVM Configuration Examples creating logical volume, Creating Linear Logical Volumes logical volume, example, Creating an LVM Logical Volume on Three Disks physical volumes, Creating Physical Volumes striped logical volume, example, Creating a Striped Logical Volume volume group, clustered, Creating Volume Groups in a Cluster volume groups, Creating Volume Groups creating LVM volumes overview, Logical Volume Creation Overview D data relocation, online, Online Data Relocation deactivating volume groups, Activating and Deactivating Volume Groups device numbers major, Persistent Device Numbers minor, Persistent Device Numbers persistent, Persistent Device Numbers device path names, Using CLI Commands device scan filters, Controlling LVM Device Scans with Filters device size, maximum, Creating Volume Groups device special file directory, Creating Volume Groups display sorting output, Sorting LVM Reports displaying logical volumes, Displaying Logical Volumes , The lvs Command physical volumes, Displaying Physical Volumes , The pvs Command volume groups, Displaying Volume Groups , The vgs Command E extent allocation, Creating Volume Groups , LVM Allocation definition, Volume Groups , Creating Volume Groups F features, new and changed, New and Changed Features file system growing on a logical volume, Growing a File System on a Logical Volume filters, Controlling LVM Device Scans with Filters G growing file system logical volume, Growing a File System on a Logical Volume H help display, Using CLI Commands I initializing partitions, Initializing Physical Volumes physical volumes, Initializing Physical Volumes Insufficient Free Extents message, Insufficient Free Extents for a Logical Volume L linear logical volume converting to mirrored, Changing Mirrored Volume Configuration creation, Creating Linear Logical Volumes definition, Linear Volumes logging, Logging logical volume activation, Controlling Logical Volume Activation administration, general, Logical Volume Administration cache, Creating LVM Cache Logical Volumes changing parameters, Changing the Parameters of a Logical Volume Group creation, Creating Linear Logical Volumes creation example, Creating an LVM Logical Volume on Three Disks definition, Logical Volumes , LVM Logical Volumes displaying, Displaying Logical Volumes , Customized Reporting for LVM , The lvs Command exclusive access, Activating Logical Volumes on Individual Nodes in a Cluster extending, Growing Logical 
Volumes growing, Growing Logical Volumes historical, Tracking and Displaying Historical Logical Volumes (Red Hat Enterprise Linux 7.3 and Later) linear, Creating Linear Logical Volumes local access, Activating Logical Volumes on Individual Nodes in a Cluster lvs display arguments, The lvs Command mirrored, Creating Mirrored Volumes reducing, Shrinking Logical Volumes removing, Removing Logical Volumes renaming, Renaming Logical Volumes snapshot, Creating Snapshot Volumes striped, Creating Striped Volumes thinly-provisioned, Creating Thinly-Provisioned Logical Volumes thinly-provisioned snapshot, Creating Thinly-Provisioned Snapshot Volumes lvchange command, Changing the Parameters of a Logical Volume Group lvconvert command, Changing Mirrored Volume Configuration lvcreate command, Creating Linear Logical Volumes lvdisplay command, Displaying Logical Volumes lvextend command, Growing Logical Volumes LVM architecture overview, LVM Architecture Overview clustered, LVM Logical Volumes in a Red Hat High Availability Cluster components, LVM Architecture Overview , LVM Components custom report format, Customized Reporting for LVM directory structure, Creating Volume Groups help, Using CLI Commands label, Physical Volumes logging, Logging logical volume administration, Logical Volume Administration physical volume administration, Physical Volume Administration physical volume, definition, Physical Volumes volume group, definition, Volume Groups lvmdiskscan command, Scanning for Block Devices lvmetad daemon, The Metadata Daemon (lvmetad) lvreduce command, Shrinking Logical Volumes lvremove command, Removing Logical Volumes lvrename command, Renaming Logical Volumes lvs command, Customized Reporting for LVM , The lvs Command display arguments, The lvs Command lvscan command, Displaying Logical Volumes M man page display, Using CLI Commands metadata backup, Logical Volume Backup , Backing Up Volume Group Metadata recovery, Recovering Physical Volume Metadata metadata daemon, The Metadata Daemon (lvmetad) mirrored logical volume clustered, Creating a Mirrored LVM Logical Volume in a Cluster converting to linear, Changing Mirrored Volume Configuration creation, Creating Mirrored Volumes failure policy, Mirrored Logical Volume Failure Policy failure recovery, Recovering from LVM Mirror Failure reconfiguration, Changing Mirrored Volume Configuration mirror_image_fault_policy configuration parameter, Mirrored Logical Volume Failure Policy mirror_log_fault_policy configuration parameter, Mirrored Logical Volume Failure Policy O online data relocation, Online Data Relocation overview features, new and changed, New and Changed Features P partition type, setting, Setting the Partition Type partitions multiple, Multiple Partitions on a Disk path names, Using CLI Commands persistent device numbers, Persistent Device Numbers physical extent preventing allocation, Preventing Allocation on a Physical Volume physical volume adding to a volume group, Adding Physical Volumes to a Volume Group administration, general, Physical Volume Administration creating, Creating Physical Volumes definition, Physical Volumes display, The pvs Command displaying, Displaying Physical Volumes , Customized Reporting for LVM illustration, LVM Physical Volume Layout initializing, Initializing Physical Volumes layout, LVM Physical Volume Layout pvs display arguments, The pvs Command recovery, Replacing a Missing Physical Volume removing, Removing Physical Volumes removing from volume group, Removing Physical Volumes from a Volume Group 
removing lost volume, Removing Lost Physical Volumes from a Volume Group resizing, Resizing a Physical Volume pvdisplay command, Displaying Physical Volumes pvmove command, Online Data Relocation pvremove command, Removing Physical Volumes pvresize command, Resizing a Physical Volume pvs command, Customized Reporting for LVM display arguments, The pvs Command pvscan command, Displaying Physical Volumes R RAID logical volume, RAID Logical Volumes extending, Extending a RAID Volume growing, Extending a RAID Volume reducing logical volume, Shrinking Logical Volumes removing disk from a logical volume, Removing a Disk from a Logical Volume logical volume, Removing Logical Volumes physical volumes, Removing Physical Volumes renaming logical volume, Renaming Logical Volumes volume group, Renaming a Volume Group report format, LVM devices, Customized Reporting for LVM resizing physical volume, Resizing a Physical Volume rules.d directory, udev Integration with the Device Mapper S scanning block devices, Scanning for Block Devices scanning devices, filters, Controlling LVM Device Scans with Filters snapshot logical volume creation, Creating Snapshot Volumes snapshot volume definition, Snapshot Volumes striped logical volume creation, Creating Striped Volumes creation example, Creating a Striped Logical Volume definition, Striped Logical Volumes extending, Extending a Striped Volume growing, Extending a Striped Volume T thin snapshot volume, Thinly-Provisioned Snapshot Volumes thin volume creation, Creating Thinly-Provisioned Logical Volumes thinly-provisioned logical volume, Thinly-Provisioned Logical Volumes (Thin Volumes) creation, Creating Thinly-Provisioned Logical Volumes thinly-provisioned snapshot logical volume creation, Creating Thinly-Provisioned Snapshot Volumes thinly-provisioned snapshot volume, Thinly-Provisioned Snapshot Volumes troubleshooting, LVM Troubleshooting U udev device manager, Device Mapper Support for the udev Device Manager udev rules, udev Integration with the Device Mapper units, command line, Using CLI Commands V verbose output, Using CLI Commands vgcfgbackup command, Backing Up Volume Group Metadata vgcfgrestore command, Backing Up Volume Group Metadata vgchange command, Changing the Parameters of a Volume Group vgcreate command, Creating Volume Groups , Creating Volume Groups in a Cluster vgdisplay command, Displaying Volume Groups vgexport command, Moving a Volume Group to Another System vgextend command, Adding Physical Volumes to a Volume Group vgimport command, Moving a Volume Group to Another System vgmerge command, Combining Volume Groups vgmknodes command, Recreating a Volume Group Directory vgreduce command, Removing Physical Volumes from a Volume Group vgrename command, Renaming a Volume Group vgs command, Customized Reporting for LVM display arguments, The vgs Command vgscan command, Scanning Disks for Volume Groups to Build the Cache File vgsplit command, Splitting a Volume Group volume group activating, Activating and Deactivating Volume Groups administration, general, Volume Group Administration changing parameters, Changing the Parameters of a Volume Group combining, Combining Volume Groups creating, Creating Volume Groups creating in a cluster, Creating Volume Groups in a Cluster deactivating, Activating and Deactivating Volume Groups definition, Volume Groups displaying, Displaying Volume Groups , Customized Reporting for LVM , The vgs Command extending, Adding Physical Volumes to a Volume Group growing, Adding Physical Volumes to a Volume Group 
merging, Combining Volume Groups moving between systems, Moving a Volume Group to Another System reducing, Removing Physical Volumes from a Volume Group removing, Removing Volume Groups renaming, Renaming a Volume Group shrinking, Removing Physical Volumes from a Volume Group splitting, Splitting a Volume Group example procedure, Splitting a Volume Group vgs display arguments, The vgs Command
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/ix01
Chapter 11. Monitoring application health by using health checks
Chapter 11. Monitoring application health by using health checks In software systems, components can become unhealthy due to transient issues such as temporary connectivity loss, configuration errors, or problems with external dependencies. OpenShift Container Platform applications have a number of options to detect and handle unhealthy containers. 11.1. Understanding health checks A health check periodically performs diagnostics on a running container using any combination of the readiness, liveness, and startup health checks. You can include one or more probes in the specification for the pod that contains the container which you want to perform the health checks. Note If you want to add or edit health checks in an existing pod, you must edit the pod DeploymentConfig object or use the Developer perspective in the web console. You cannot use the CLI to add or edit health checks for an existing pod. Readiness probe A readiness probe determines if a container is ready to accept service requests. If the readiness probe fails for a container, the kubelet removes the pod from the list of available service endpoints. After a failure, the probe continues to examine the pod. If the pod becomes available, the kubelet adds the pod to the list of available service endpoints. Liveness health check A liveness probe determines if a container is still running. If the liveness probe fails due to a condition such as a deadlock, the kubelet kills the container. The pod then responds based on its restart policy. For example, a liveness probe on a pod with a restartPolicy of Always or OnFailure kills and restarts the container. Startup probe A startup probe indicates whether the application within a container is started. All other probes are disabled until the startup succeeds. If the startup probe does not succeed within a specified time period, the kubelet kills the container, and the container is subject to the pod restartPolicy . Some applications can require additional startup time on their first initialization. You can use a startup probe with a liveness or readiness probe to delay that probe long enough to handle lengthy start-up time using the failureThreshold and periodSeconds parameters. For example, you can add a startup probe, with a failureThreshold of 30 failures and a periodSeconds of 10 seconds (30 * 10s = 300s) for a maximum of 5 minutes, to a liveness probe. After the startup probe succeeds the first time, the liveness probe takes over. You can configure liveness, readiness, and startup probes with any of the following types of tests: HTTP GET : When using an HTTP GET test, the test determines the healthiness of the container by using a web hook. The test is successful if the HTTP response code is between 200 and 399 . You can use an HTTP GET test with applications that return HTTP status codes when completely initialized. Container Command: When using a container command test, the probe executes a command inside the container. The probe is successful if the test exits with a 0 status. TCP socket: When using a TCP socket test, the probe attempts to open a socket to the container. The container is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete. You can configure several fields to control the behavior of a probe: initialDelaySeconds : The time, in seconds, after the container starts before the probe can be scheduled. The default is 0. 
periodSeconds : The delay, in seconds, between performing probes. The default is 10 . This value must be greater than timeoutSeconds . timeoutSeconds : The number of seconds of inactivity after which the probe times out and the container is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . successThreshold : The number of times that the probe must report success after a failure to reset the container status to successful. The value must be 1 for a liveness probe. The default is 1 . failureThreshold : The number of times that the probe is allowed to fail. The default is 3. After the specified attempts: for a liveness probe, the container is restarted for a readiness probe, the pod is marked Unready for a startup probe, the container is killed and is subject to the pod's restartPolicy Example probes The following are samples of different probes as they would appear in an object specification. Sample readiness probe with a container command readiness probe in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application ... spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy ... 1 The container name. 2 The container image to deploy. 3 A readiness probe. 4 A container command test. 5 The commands to execute on the container. Sample container command startup probe and liveness probe with container command tests in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application ... spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11 ... 1 The container name. 2 Specify the container image to deploy. 3 A liveness probe. 4 An HTTP GET test. 5 The internet scheme: HTTP or HTTPS . The default value is HTTP . 6 The port on which the container is listening. 7 A startup probe. 8 An HTTP GET test. 9 The port on which the container is listening. 10 The number of times to try the probe after a failure. 11 The number of seconds to perform the probe. Sample liveness probe with a container command test that uses a timeout in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application ... spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8 ... 1 The container name. 2 Specify the container image to deploy. 3 The liveness probe. 4 The type of probe, here a container command probe. 5 The command line to execute inside the container. 6 How often in seconds to perform the probe. 7 The number of consecutive successes needed to show success after a failure. 8 The number of times to try the probe after a failure. Sample readiness probe and liveness probe with a TCP socket test in a deployment kind: Deployment apiVersion: apps/v1 ... spec: ... 
template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 ... 1 The readiness probe. 2 The liveness probe. 11.2. Configuring health checks using the CLI To configure readiness, liveness, and startup probes, add one or more probes to the specification for the pod that contains the container which you want to perform the health checks Note If you want to add or edit health checks in an existing pod, you must edit the pod DeploymentConfig object or use the Developer perspective in the web console. You cannot use the CLI to add or edit health checks for an existing pod. Procedure To add probes for a container: Create a Pod object to add one or more probes: apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19 1 Specify the container name. 2 Specify the container image to deploy. 3 Optional: Create a Liveness probe. 4 Specify a test to perform, here a TCP Socket test. 5 Specify the port on which the container is listening. 6 Specify the time, in seconds, after the container starts before the probe can be scheduled. 7 Specify the number of seconds to perform the probe. The default is 10 . This value must be greater than timeoutSeconds . 8 Specify the number of seconds of inactivity after which the probe is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . 9 Optional: Create a Readiness probe. 10 Specify the type of test to perform, here an HTTP test. 11 Specify a host IP address. When host is not defined, the PodIP is used. 12 Specify HTTP or HTTPS . When scheme is not defined, the HTTP scheme is used. 13 Specify the port on which the container is listening. 14 Optional: Create a Startup probe. 15 Specify the type of test to perform, here an Container Execution probe. 16 Specify the commands to execute on the container. 17 Specify the number of times to try the probe after a failure. 18 Specify the number of seconds to perform the probe. The default is 10 . This value must be greater than timeoutSeconds . 19 Specify the number of seconds of inactivity after which the probe is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . Note If the initialDelaySeconds value is lower than the periodSeconds value, the first Readiness probe occurs at some point between the two periods due to an issue with timers. The timeoutSeconds value must be lower than the periodSeconds value. 
Create the Pod object: USD oc create -f <file-name>.yaml Verify the state of the health check pod: USD oc describe pod health-check Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image "k8s.gcr.io/liveness" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image "k8s.gcr.io/liveness" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container The following is the output of a failed probe that restarted a container: Sample Liveness check output with unhealthy container USD oc describe pod pod1 Example output .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "k8s.gcr.io/liveness" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "k8s.gcr.io/liveness" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image "k8s.gcr.io/liveness" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "k8s.gcr.io/liveness" in 244.116568ms 11.3. Monitoring application health using the Developer perspective You can use the Developer perspective to add three types of health probes to your container to ensure that your application is healthy: Use the Readiness probe to check if the container is ready to handle requests. Use the Liveness probe to check if the container is running. Use the Startup probe to check if the application within the container has started. You can add health checks either while creating and deploying an application, or after you have deployed an application. 11.4. Adding health checks using the Developer perspective You can use the Topology view to add health checks to your deployed application. Prerequisites: You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. Procedure In the Topology view, click on the application node to see the side panel. If the container does not have health checks added to ensure the smooth running of your application, a Health Checks notification is displayed with a link to add health checks. In the displayed notification, click the Add Health Checks link. Alternatively, you can also click the Actions drop-down list and select Add Health Checks . Note that if the container already has health checks, you will see the Edit Health Checks option instead of the add option. 
In the Add Health Checks form, if you have deployed multiple containers, use the Container drop-down list to ensure that the appropriate container is selected. Click the required health probe links to add them to the container. Default data for the health checks is prepopulated. You can add the probes with the default data or further customize the values and then add them. For example, to add a Readiness probe that checks if your container is ready to handle requests: Click Add Readiness Probe , to see a form containing the parameters for the probe. Click the Type drop-down list to select the request type you want to add. For example, in this case, select Container Command to select the command that will be executed inside the container. In the Command field, add an argument cat , similarly, you can add multiple arguments for the check, for example, add another argument /tmp/healthy . Retain or modify the default values for the other parameters as required. Note The Timeout value must be lower than the Period value. The Timeout default value is 1 . The Period default value is 10 . Click the check mark at the bottom of the form. The Readiness Probe Added message is displayed. Click Add to add the health check. You are redirected to the Topology view and the container is restarted. In the side panel, verify that the probes have been added by clicking on the deployed pod under the Pods section. In the Pod Details page, click the listed container in the Containers section. In the Container Details page, verify that the Readiness probe - Exec Command cat /tmp/healthy has been added to the container. 11.5. Editing health checks using the Developer perspective You can use the Topology view to edit health checks added to your application, modify them, or add more health checks. Prerequisites: You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. You have added health checks to your application. Procedure In the Topology view, right-click your application and select Edit Health Checks . Alternatively, in the side panel, click the Actions drop-down list and select Edit Health Checks . In the Edit Health Checks page: To remove a previously added health probe, click the minus sign adjoining it. To edit the parameters of an existing probe: Click the Edit Probe link to a previously added probe to see the parameters for the probe. Modify the parameters as required, and click the check mark to save your changes. To add a new health probe, in addition to existing health checks, click the add probe links. For example, to add a Liveness probe that checks if your container is running: Click Add Liveness Probe , to see a form containing the parameters for the probe. Edit the probe parameters as required. Note The Timeout value must be lower than the Period value. The Timeout default value is 1 . The Period default value is 10 . Click the check mark at the bottom of the form. The Liveness Probe Added message is displayed. Click Save to save your modifications and add the additional probes to your container. You are redirected to the Topology view. In the side panel, verify that the probes have been added by clicking on the deployed pod under the Pods section. In the Pod Details page, click the listed container in the Containers section. 
In the Container Details page, verify that the Liveness probe - HTTP Get 10.129.4.65:8080/ has been added to the container, in addition to the earlier existing probes. 11.6. Monitoring health check failures using the Developer perspective In case an application health check fails, you can use the Topology view to monitor these health check violations. Prerequisites: You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. You have added health checks to your application. Procedure In the Topology view, click on the application node to see the side panel. Click the Observe tab to see the health check failures in the Events (Warning) section. Click the down arrow adjoining Events (Warning) to see the details of the health check failure. Additional resources For details on switching to the Developer perspective in the web console, see About the Developer perspective . For details on adding health checks while creating and deploying an application, see Advanced Options in the Creating applications using the Developer perspective section.
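As a complement to defining probes directly in a Pod manifest as shown earlier in this chapter, the oc client can also patch probes into the pod template of a workload resource such as a Deployment with the oc set probe command. The following is a minimal sketch only, not a substitute for the procedures above: the deployment name my-deployment, the port 8080, and the /healthz path are placeholder assumptions, and the exact set of supported flags depends on your oc client version.

# Assumption: a Deployment named my-deployment exists in the current project.
# 'oc set probe' edits the pod template of the workload resource, so the change
# rolls out new pods rather than modifying a running pod in place. This is
# consistent with the note above that probes on an existing pod cannot be
# added or edited from the CLI.

# Readiness probe: HTTP GET against /healthz on container port 8080.
oc set probe deployment/my-deployment --readiness \
  --get-url=http://:8080/healthz \
  --initial-delay-seconds=10 --period-seconds=20 --timeout-seconds=10

# Liveness probe: TCP socket test on port 8080.
oc set probe deployment/my-deployment --liveness \
  --open-tcp=8080 \
  --initial-delay-seconds=15 --period-seconds=20 --timeout-seconds=10

# Inspect the resulting probes in the pod template.
oc describe deployment/my-deployment | grep -iA3 'liveness\|readiness'

# Remove both probes again if they are no longer needed.
oc set probe deployment/my-deployment --readiness --liveness --remove

Because the command changes the pod template, the Deployment performs a new rollout and the updated probes apply to the replacement pods.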
[ "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8", "kind: Deployment apiVersion: apps/v1 spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19", "oc create -f <file-name>.yaml", "oc describe pod health-check", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"k8s.gcr.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"k8s.gcr.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container", "oc describe pod pod1", ". 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"k8s.gcr.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 244.116568ms" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/building_applications/application-health
Chapter 6. References
Chapter 6. References This chapter enumerates other references for more information about SystemTap. It is advisable to consult these sources in the course of writing advanced probes and tapsets. SystemTap Wiki The SystemTap Wiki is a collection of links and articles related to the deployment, usage, and development of SystemTap. You can find it at http://sourceware.org/systemtap/wiki/HomePage . SystemTap Tutorial Much of the content in this book comes from the SystemTap Tutorial . The SystemTap Tutorial is a more appropriate reference for users with intermediate to advanced knowledge of C++ and kernel development, and can be found at http://sourceware.org/systemtap/tutorial/ . man stapprobes The stapprobes man page enumerates a variety of probe points supported by SystemTap, along with additional aliases defined by the SystemTap tapset library. The bottom of the man page includes a list of other man pages enumerating similar probe points for specific system components, such as stapprobes.scsi , stapprobes.kprocess , stapprobes.signal , etc. man stapfuncs The stapfuncs man page enumerates numerous functions supported by the SystemTap tapset library, along with the prescribed syntax for each one. Note, however, that this is not a complete list of all supported functions; there are more undocumented functions available. SystemTap Language Reference This document is a comprehensive reference of SystemTap's language constructs and syntax. It is recommended for users with a rudimentary to intermediate knowledge of C++ and other similar programming languages. The SystemTap Language Reference is available to all users at http://sourceware.org/systemtap/langref/ . Tapset Developers Guide Once you have sufficient proficiency in writing SystemTap scripts, you can try your hand at writing your own tapsets. The Tapset Developers Guide describes how to add functions to your tapset library. Test Suite The systemtap-testsuite package allows you to test the entire SystemTap toolchain without having to build from source. In addition, it contains numerous examples of SystemTap scripts you can study and test; some of these scripts are also documented in Chapter 4, Useful SystemTap Scripts . By default, the example scripts included in systemtap-testsuite are located in /usr/share/systemtap/testsuite/systemtap.examples .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/references
E.9. Model Extension Definition Editor
E.9. Model Extension Definition Editor The MED Editor is an editor with multiple tabs and is used to create and edit user-defined MEDs ( *.mxd files) in the workspace. The MED Editor has 3 sub-editors (Overview, Properties, and Source) which share a common header section. Here are the MED sub-editor tabs: Overview Sub-Editor - this editor is where the general MED information is managed. This information includes the namespace prefix, namespace URI, extended model class, and the description. The Overview sub-editor looks like this: Figure E.25. Overview Tab Properties Sub-Editor - this editor is where the MED extension properties are managed. Each extension property must be associated with a model object type. The Properties sub-editor is divided into 2 sections (Extended Model Objects and Extension Properties) and looks like this: Figure E.26. Properties Tab Source - this tab is a read-only XML source viewer that shows the details of your MED. This source viewer is NOT editable. The GUI components on the Overview and Properties sub-editors are decorated with an error icon when the data in that GUI component has a validation error. Hovering over an error decoration displays a tooltip with the specific error message. Those error messages relate to the error messages shown in the common header section. Here is an example of the error decoration: Figure E.27. Text Field With Error The MED sub-editors share a header section. The header is composed of the following: Status Image - an image indicating the most severe validation message (error, warning, or info). If there are no validation messages, the model extension image is shown. Title - the title of the sub-editor being shown. Menu - a drop-down menu containing actions for (1) adding to and updating the MED in the registry, and (2) showing the Model Extension Registry View. Validation Message - this area displays an OK message or an error summary message. When a summary message is shown, the tooltip for that message enumerates all the messages. Toolbar - contains the same actions as the drop-down menu. Below is an example of the shared header section, which includes an error message tooltip. Figure E.28. Shared Header Example
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/Model_Extension_Definition_Editor
5.140. latencytop
5.140. latencytop 5.140.1. RHBA-2012:0864 - latencytop bug fix and enhancement update Updated latencytop packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. LatencyTOP is a tool to monitor system latencies. Bug Fix BZ# 633698 When running LatencyTOP as a normal user, LatencyTOP attempted and failed to mount the debug file system. A misleading error message was displayed, suggesting that kernel-debug be installed even though this was already the running kernel. LatencyTOP has been improved to exit and display "Permission denied" when run as a normal user. In addition, fsync view has been removed from the "latencytop" package because it depended on a non-standard kernel tracer that was never present in Red Hat Enterprise Linux kernels or upstream kernels. As a result, LatencyTOP no longer attempts to mount the debugfs file system. Enhancement BZ# 726476 The "latencytop" package requires GTK libraries. Having GTK libraries installed on servers may be undesirable. A build of LatencyTOP without dependencies on GTK libraries is now available under the package name "latencytop-tui". Users are advised to upgrade to these updated latencytop packages, which fix this bug and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/latencytop
7.102. libcgroup
7.102. libcgroup 7.102.1. RHBA-2015:1263 - libcgroup bug fix update Updated libcgroup packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The libcgroup packages provide tools and libraries to control and monitor control groups. Bug Fixes BZ# 1036355 Previously, the cgconfigparser utility wrote the whole multi-line value in a single write() function call, while the 'devices' kernel subsystem expected only one line per write(). Consequently, cgconfigparser did not properly set the multi-line variables. The underlying source code has been fixed, and cgconfigparser now parses all variables as intended. BZ# 1139205 Prior to this update, if '/etc/cgconfig.conf' or a configuration file in the '/etc/cgconfig.d/' directory contained the cgroup name 'default' that was not enclosed in double quotation marks, backwards compatibility was broken and cgconfigparser failed to parse the file. With this update, 'default' without double quotation marks is again considered a valid cgroup name, and configuration files are now parsed correctly. Users of libcgroup are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-libcgroup
Chapter 10. Grouping Load-balancing service objects by using tags
Chapter 10. Grouping Load-balancing service objects by using tags Tags are arbitrary strings that you can add to Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) objects for the purpose of classifying them into groups. Tags do not affect the functionality of load-balancing objects: load balancers, listeners, pools, members, health monitors, rules, and policies. You can add a tag when you create the object, or add or remove a tag after the object has been created. By associating a particular tag with load-balancing objects, you can run list commands to filter objects that belong to one or more groups. Being able to filter objects into one or more groups can be a starting point in managing usage, allocation, and maintenance of your load-balancing service resources. The ability to tag objects can also be leveraged by automated configuration management tools. The topics included in this section are: Section 10.1, "Adding tags when creating Load-balancing service objects" Section 10.2, "Adding or removing tags on pre-existing Load-balancing service objects" Section 10.3, "Filtering Load-balancing service objects by using tags" 10.1. Adding tags when creating Load-balancing service objects You can add a tag of your choice when you create a Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) object. When the tags are in place, you can filter load balancers, listeners, pools, members, health monitors, rules, and policies by using their respective loadbalancer list commands. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Add a tag to a load-balancing object when you create it by using the --tag <tag> option with the appropriate create command for the object: openstack loadbalancer create --tag <tag> ... openstack loadbalancer listener create --tag <tag> ... openstack loadbalancer pool create --tag <tag> ... openstack loadbalancer member create --tag <tag> ... openstack loadbalancer healthmonitor create --tag <tag> ... openstack loadbalancer l7policy create --tag <tag> ... openstack loadbalancer l7rule create --tag <tag> ... Note A tag can be any valid unicode string with a maximum length of 255 characters. Example - creating and tagging a load balancer In this example a load balancer, lb1 , is created with two tags, Finance and Sales : Note Load-balancing service objects can have one or more tags. Repeat the --tag <tag> option for each additional tag that you want to add.
Example - creating and tagging a listener In this example a listener, listener1 , is created with a tag, Sales : Example - creating and tagging a pool In this example a pool, pool1 , is created with a tag, Sales : Example - creating a member in a pool and tagging it In this example a member, 192.0.2.10 , is created in pool1 with a tag, Sales : Example - creating and tagging a health monitor In this example a health monitor, healthmon1 , is created with a tag, Sales : Example - creating and tagging an L7 policy In this example an L7 policy, policy1 , is created with a tag, Sales : Example - creating and tagging an L7 rule In this example an L7 rule, rule1 , is created with a tag, Sales : Verification Confirm that the object that you created exists, and contains the tag that you added, by using the appropriate show command for the object. Example In this example, the show command is run on the loadbalancer, lb1 : Sample output 10.2. Adding or removing tags on pre-existing Load-balancing service objects You can add and remove tags of your choice on Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) objects after they have been created. When the tags are in place, you can filter load balancers, listeners, pools, members, health monitors, rules, and policies by using their respective loadbalancer list commands. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Do one of the following: Add a tag to a pre-existing load-balancing object by using the --tag <tag> option with the appropriate set command for the object: openstack loadbalancer set --tag <tag> <load_balancer_name_or_ID> openstack loadbalancer listener set --tag <tag> <listener_name_or_ID> openstack loadbalancer pool set --tag <tag> <pool_name_or_ID> openstack loadbalancer member set --tag <tag> <pool_name_or_ID> <member_name_or_ID> openstack loadbalancer healthmonitor set --tag <tag> <healthmon_name_or_ID> openstack loadbalancer l7policy set --tag <tag> <l7policy_name_or_ID> openstack loadbalancer l7rule set --tag <tag> <l7policy_name_or_ID> <l7rule_ID> Note A tag can be any valid unicode string with a maximum length of 255 characters. Example In this example the tags, Finance and Sales , are added to the load balancer, lb1 : Note Load-balancing service objects can have one or more tags. Repeat the --tag <tag> option for each additional tag that you want to add.
Remove a tag from a pre-existing load-balancing object by using the --tag <tag> option with the appropriate unset command for the object: openstack loadbalancer unset --tag <tag> <load_balancer_name_or_ID> openstack loadbalancer listener unset --tag <tag> <listener_name_or_ID> openstack loadbalancer pool unset --tag <tag> <pool_name_or_ID> openstack loadbalancer member unset --tag <tag> <pool_name_or_ID> <member_name_or_ID> openstack loadbalancer healthmonitor unset --tag <tag> <healthmon_name_or_ID> openstack loadbalancer l7policy unset --tag <tag> <policy_name_or_ID> openstack loadbalancer l7rule unset --tag <tag> <policy_name_or_ID> <l7rule_ID> Example In this example, the tag, Sales , is removed from the load balancer, lb1 : Remove all tags from a pre-existing load-balancing object by using the --no-tag option with the appropriate set command for the object: openstack loadbalancer set --no-tag <load_balancer_name_or_ID> openstack loadbalancer listener set --no-tag <listener_name_or_ID> openstack loadbalancer pool set --no-tag <pool_name_or_ID> openstack loadbalancer member set --no-tag <pool_name_or_ID> <member_name_or_ID> openstack loadbalancer healthmonitor set --no-tag <healthmon_name_or_ID> openstack loadbalancer l7policy set --no-tag <l7policy_name_or_ID> openstack loadbalancer l7rule set --no-tag <l7policy_name_or_ID> <l7rule_ID> Example In this example, all tags are removed from the load balancer, lb1 : Verification Confirm that you have added or removed one or more tags on the load-balancing object, by using the appropriate show command for the object. Example In this example, the show command is run on the loadbalancer, lb1 : Sample output 10.3. Filtering Load-balancing service objects by using tags You can use the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) to create lists of objects. For the objects that are tagged, you can create filtered lists: lists that include or exclude objects based on whether your objects contain one or more of the specified tags. Being able to filter load balancers, listeners, pools, members, health monitors, rules, and policies using tags can be a starting point in managing usage, allocation, and maintenance of your load-balancing service resources. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Filter the objects that you want to list by running the appropriate loadbalancer list command for the objects with one of the tag options: Table 10.1. Tag options for filtering objects In my list, I want to... Examples include objects that match all specified tags. USD openstack loadbalancer list --tags Sales,Finance USD openstack loadbalancer listener list --tags Sales,Finance USD openstack loadbalancer l7pool list --tags Sales,Finance USD openstack loadbalancer member list --tags Sales,Finance pool1 USD openstack loadbalancer healthmonitor list --tags Sales,Finance USD openstack loadbalancer l7policy list --tags Sales,Finance USD openstack loadbalancer l7rule list --tags Sales,Finance policy1 include objects that match one or more specified tags. 
USD openstack loadbalancer list --any-tags Sales,Finance USD openstack loadbalancer listener list --any-tags Sales,Finance USD openstack loadbalancer l7pool list --any-tags Sales,Finance USD openstack loadbalancer member list --any-tags Sales,Finance pool1 USD openstack loadbalancer healthmonitor list --any-tags Sales,Finance USD openstack loadbalancer l7policy list --any-tags Sales,Finance USD openstack loadbalancer l7rule list --any-tags Sales,Finance policy1 exclude objects that match all specified tags. USD openstack loadbalancer list --not-tags Sales,Finance USD openstack loadbalancer listener list --not-tags Sales,Finance USD openstack loadbalancer l7pool list --not-tags Sales,Finance USD openstack loadbalancer member list --not-tags Sales,Finance pool1 USD openstack loadbalancer healthmonitor list --not-tags Sales,Finance USD openstack loadbalancer l7policy list --not-tags Sales,Finance USD openstack loadbalancer l7rule list --not-tags Sales,Finance policy1 exclude objects that match one or more specified tags. USD openstack loadbalancer list --not-any-tags Sales,Finance USD openstack loadbalancer listener list --not-any-tags Sales,Finance USD openstack loadbalancer l7pool list --not-any-tags Sales,Finance USD openstack loadbalancer member list --not-any-tags Sales,Finance pool1 USD openstack loadbalancer healthmonitor list --not-any-tags Sales,Finance USD openstack loadbalancer l7policy list --not-any-tags Sales,Finance USD openstack loadbalancer l7rule list --not-any-tags Sales,Finance policy1 Note When specifying more than one tag, separate the tags by using a comma.
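To make the difference between the four filter options concrete, the following sketch creates two tagged load balancers and then runs each list variant against them. The names lb-sales and lb-shared and the subnet public_subnet are hypothetical placeholders; substitute values from your own environment.

# Create two load balancers with different tag sets.
openstack loadbalancer create --name lb-sales --vip-subnet-id public_subnet --tag Sales
openstack loadbalancer create --name lb-shared --vip-subnet-id public_subnet --tag Sales --tag Finance

# --tags (AND match): only lb-shared carries both tags, so only it is listed.
openstack loadbalancer list --tags Sales,Finance

# --any-tags (OR match): both load balancers carry at least one of the tags, so both are listed.
openstack loadbalancer list --any-tags Sales,Finance

# --not-tags: excludes objects that carry all of the tags; lb-sales is listed, lb-shared is not.
openstack loadbalancer list --not-tags Sales,Finance

# --not-any-tags: excludes objects that carry any of the tags; neither load balancer is listed.
openstack loadbalancer list --not-any-tags Sales,Finance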
[ "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --tag Finance --tag Sales", "openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 --tag Sales lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --tag Sales", "openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 --tag Sales pool1", "openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / --tag Sales pool1", "openstack loadbalancer l7policy create --action REDIRECT_PREFIX --redirect-prefix https://www.example.com/ --name policy1 http_listener --tag Sales", "openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value / --tag Sales policy1", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | admin_state_up | True | | availability_zone | None | | created_at | 2024-08-06T19:34:15 | | description | | | flavor_id | None | | id | 7975374b-3367-4436-ab19-2d79d8c1f29b | | listeners | | | name | lb1 | | operating_status | ONLINE | | pools | | | project_id | 2eee3b86ca404cdd977281dac385fd4e | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-08-07T13:30:17 | | vip_address | 172.24.3.76 | | vip_network_id | 4c241fc4-95eb-491a-affe-26c53a8805cd | | vip_port_id | 9978a598-cc34-47f7-ba28-49431d570fd1 | | vip_qos_policy_id | None | | vip_subnet_id | e999d323-bd0f-4469-974f-7f66d427e507 | | tags | Finance | | | Sales | +---------------------+--------------------------------------+", "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack loadbalancer set --tag Finance --tag Sales lb1", "openstack loadbalancer unset --tag Sales lb1", "openstack loadbalancer set --no-tag lb1", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | admin_state_up | True | | availability_zone | None | | created_at | 2024-08-06T19:34:15 | | description | | | flavor_id | None | | id | 7975374b-3367-4436-ab19-2d79d8c1f29b | | listeners | | | name | lb1 | | operating_status | ONLINE | | pools | | | project_id | 2eee3b86ca404cdd977281dac385fd4e | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-08-07T13:30:17 | | vip_address | 172.24.3.76 | | vip_network_id | 4c241fc4-95eb-491a-affe-26c53a8805cd | | vip_port_id | 9978a598-cc34-47f7-ba28-49431d570fd1 | | vip_qos_policy_id | None | | vip_subnet_id | e999d323-bd0f-4469-974f-7f66d427e507 | | tags | Finance | | | Sales | +---------------------+--------------------------------------+", "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_load_balancing_as_a_service/group-lb-objects-tags_rhoso-lbaas
Chapter 18. Rebooting nodes
Chapter 18. Rebooting nodes You might need to reboot the nodes in the undercloud and overcloud. Use the following procedures to understand how to reboot different node types. If you reboot all nodes in one role, it is advisable to reboot each node individually. If you reboot all nodes in a role simultaneously, service downtime can occur during the reboot operation. If you reboot all nodes in your OpenStack Platform environment, reboot the nodes in the following sequential order: Recommended node reboot order Reboot the undercloud node. Reboot Controller and other composable nodes. Reboot standalone Ceph MON nodes. Reboot Ceph Storage nodes. Reboot Object Storage service (swift) nodes. Reboot Compute nodes. 18.1. Rebooting the undercloud node Complete the following steps to reboot the undercloud node. Procedure Log in to the undercloud as the stack user. Reboot the undercloud: Wait until the node boots. 18.2. Rebooting Controller and composable nodes Reboot Controller nodes and standalone nodes based on composable roles, and exclude Compute nodes and Ceph Storage nodes. Procedure Log in to the node that you want to reboot. Optional: If the node uses Pacemaker resources, stop the cluster: [heat-admin@overcloud-controller-0 ~]USD sudo pcs cluster stop Reboot the node: [heat-admin@overcloud-controller-0 ~]USD sudo reboot Wait until the node boots. Verification Verify that the services are enabled. If the node uses Pacemaker services, check that the node has rejoined the cluster: [heat-admin@overcloud-controller-0 ~]USD sudo pcs status If the node uses Systemd services, check that all services are enabled: [heat-admin@overcloud-controller-0 ~]USD sudo systemctl status If the node uses containerized services, check that all containers on the node are active: [heat-admin@overcloud-controller-0 ~]USD sudo podman ps 18.3. Rebooting standalone Ceph MON nodes Complete the following steps to reboot standalone Ceph MON nodes. Procedure Log in to a Ceph MON node. Reboot the node: Wait until the node boots and rejoins the MON cluster. Repeat these steps for each MON node in the cluster. 18.4. Rebooting a Ceph Storage (OSD) cluster Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes. Procedure Log in to a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing temporarily: USD sudo podman exec -it ceph-mon-controller-0 ceph osd set noout USD sudo podman exec -it ceph-mon-controller-0 ceph osd set norebalance Note If you have a multistack or distributed compute node (DCN) architecture, you must specify the cluster name when you set the noout and norebalance flags. For example: sudo podman exec -it ceph-mon-controller-0 ceph osd set noout --cluster <cluster_name> Select the first Ceph Storage node that you want to reboot and log in to the node. Reboot the node: Wait until the node boots. Log in to the node and check the cluster status: USD sudo podman exec -it ceph-mon-controller-0 ceph status Check that the pgmap reports all pgs as normal ( active+clean ). Log out of the node, reboot the node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes. When complete, log in to a Ceph MON or Controller node and re-enable cluster rebalancing: USD sudo podman exec -it ceph-mon-controller-0 ceph osd unset noout USD sudo podman exec -it ceph-mon-controller-0 ceph osd unset norebalance Note If you have a multistack or distributed compute node (DCN) architecture, you must specify the cluster name when you unset the noout and norebalance flags.
For example: sudo podman exec -it ceph-mon-controller-0 ceph osd set noout --cluster <cluster_name> Perform a final status check to verify that the cluster reports HEALTH_OK : USD sudo podman exec -it ceph-mon-controller-0 ceph status 18.5. Rebooting Compute nodes To ensure minimal downtime of instances in your Red Hat OpenStack Platform environment, the Migrating instances workflow outlines the steps you must complete to migrate instances from the Compute node that you want to reboot. Note If you do not migrate the instances from the source Compute node to another Compute node, the instances might be restarted on the source Compute node, which might cause the upgrade to fail. This is related to the known issue around changes to Podman and the libvirt service: BZ#2009106 - podman panic after tripleo_nova_libvirt restart two times BZ#2010135 - podman panic after tripleo_nova_libvirt restart two times Migrating instances workflow Decide whether to migrate instances to another Compute node before rebooting the node. Select and disable the Compute node that you want to reboot so that it does not provision new instances. Migrate the instances to another Compute node. Reboot the empty Compute node. Enable the empty Compute node. Prerequisites Before you reboot the Compute node, you must decide whether to migrate instances to another Compute node while the node is rebooting. Review the list of migration constraints that you might encounter when you migrate virtual machine instances between Compute nodes. For more information, see Migration constraints in Configuring the Compute Service for Instance Creation . If you cannot migrate the instances, you can set the following core template parameters to control the state of the instances after the Compute node reboots: NovaResumeGuestsStateOnHostBoot Determines whether to return instances to the same state on the Compute node after reboot. When set to False , the instances remain down and you must start them manually. The default value is False . NovaResumeGuestsShutdownTimeout Number of seconds to wait for an instance to shut down before rebooting. It is not recommended to set this value to 0 . The default value is 300 . For more information about overcloud parameters and their usage, see Overcloud Parameters . Procedure Log in to the undercloud as the stack user. List all Compute nodes and their UUIDs: USD source ~/stackrc (undercloud) USD openstack server list --name compute Identify the UUID of the Compute node that you want to reboot. From the undercloud, select a Compute node and disable it: USD source ~/overcloudrc (overcloud) USD openstack compute service list (overcloud) USD openstack compute service set <hostname> nova-compute --disable List all instances on the Compute node: (overcloud) USD openstack server list --host <hostname> --all-projects Optional: If you decide to migrate the instances to another Compute node, complete the following steps: If you decide to migrate the instances to another Compute node, use one of the following commands: To migrate the instance to a different host, run the following command: (overcloud) USD openstack server migrate <instance_id> --live <target_host> --wait Let nova-scheduler automatically select the target host: (overcloud) USD nova live-migration <instance_id> Live migrate all instances at once: USD nova host-evacuate-live <hostname> Note The nova command might cause some deprecation warnings, which are safe to ignore. Wait until migration completes. 
Confirm that the migration was successful: (overcloud) USD openstack server list --host <hostname> --all-projects Continue to migrate instances until none remain on the Compute node. Log in to the Compute node and reboot the node: [heat-admin@overcloud-compute-0 ~]USD sudo reboot Wait until the node boots. Re-enable the Compute node: USD source ~/overcloudrc (overcloud) USD openstack compute service set <hostname> nova-compute --enable Check that the Compute node is enabled: (overcloud) USD openstack compute service list
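The Compute node steps above can be strung together into a single pass per node. The following is an illustrative sketch only, assuming that live migration is possible for every instance on the node: the hostname overcloud-compute-0.localdomain and the heat-admin SSH user are placeholders, and you should confirm that the instance list is empty before rebooting.

COMPUTE=overcloud-compute-0.localdomain    # placeholder hostname

source ~/overcloudrc

# Stop scheduling new instances to the node.
openstack compute service set "$COMPUTE" nova-compute --disable

# Live migrate every instance off the node, then confirm that none remain.
nova host-evacuate-live "$COMPUTE"
openstack server list --host "$COMPUTE" --all-projects

# Reboot the now-empty node and wait for it to come back before continuing.
ssh heat-admin@"$COMPUTE" 'sudo reboot'

# Re-enable the node and verify its state.
openstack compute service set "$COMPUTE" nova-compute --enable
openstack compute service list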
[ "sudo reboot", "[heat-admin@overcloud-controller-0 ~]USD sudo pcs cluster stop", "[heat-admin@overcloud-controller-0 ~]USD sudo reboot", "[heat-admin@overcloud-controller-0 ~]USD sudo pcs status", "[heat-admin@overcloud-controller-0 ~]USD sudo systemctl status", "[heat-admin@overcloud-controller-0 ~]USD sudo podman ps", "sudo reboot", "sudo podman exec -it ceph-mon-controller-0 ceph osd set noout sudo podman exec -it ceph-mon-controller-0 ceph osd set norebalance", "sudo reboot", "sudo podman exec -it ceph-mon-controller-0 ceph status", "sudo podman exec -it ceph-mon-controller-0 ceph osd unset noout sudo podman exec -it ceph-mon-controller-0 ceph osd unset norebalance", "sudo podman exec -it ceph-mon-controller-0 ceph status", "source ~/stackrc (undercloud) USD openstack server list --name compute", "source ~/overcloudrc (overcloud) USD openstack compute service list (overcloud) USD openstack compute service set <hostname> nova-compute --disable", "(overcloud) USD openstack server list --host <hostname> --all-projects", "(overcloud) USD openstack server migrate <instance_id> --live <target_host> --wait", "(overcloud) USD nova live-migration <instance_id>", "nova host-evacuate-live <hostname>", "(overcloud) USD openstack server list --host <hostname> --all-projects", "[heat-admin@overcloud-compute-0 ~]USD sudo reboot", "source ~/overcloudrc (overcloud) USD openstack compute service set <hostname> nova-compute --enable", "(overcloud) USD openstack compute service list" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_rebooting-nodes
Chapter 2. Upgrading Red Hat Satellite
Chapter 2. Upgrading Red Hat Satellite Use the following procedures to upgrade your existing Red Hat Satellite to Red Hat Satellite 6.16. 2.1. Satellite Server upgrade considerations This section describes how to upgrade Satellite Server from 6.15 to 6.16. You can upgrade from any minor version of Satellite Server 6.15. Before you begin Review Section 1.2, "Prerequisites" . Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . Review and update your firewall configuration. For more information, see Preparing your environment for installation in Installing Satellite Server in a connected network environment . Ensure that you do not delete the manifest from the Customer Portal or in the Satellite web UI because this removes all the entitlements of your content hosts. If you have edited any of the default templates, back up the files either by cloning or exporting them. Cloning is the recommended method because that prevents them being overwritten in future updates or upgrades. To confirm if a template has been edited, you can view its History before you upgrade or view the changes in the audit log after an upgrade. In the Satellite web UI, navigate to Monitor > Audits and search for the template to see a record of changes made. If you use the export method, restore your changes by comparing the exported template and the default template, manually applying your changes. Optional: Clone your Satellite Server to test the upgrade. After you successfully test the upgrade on the clone, you can repeat the upgrade on your primary Satellite Server and discard the clone, or you can promote the clone to your primary Satellite Server and discard the primary Satellite Server. For more information, see Cloning Satellite Server in Administering Red Hat Satellite . Capsule considerations If you use content views to control updates to a Capsule Server's base operating system, or for Capsule Server repository, you must publish updated versions of those content views. Note that Satellite Server upgraded from 6.15 to 6.16 can use Capsule Servers still at 6.15. Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. FIPS mode You cannot upgrade Satellite Server from a RHEL base system that is not operating in FIPS mode to a RHEL base system that is operating in FIPS mode. To run Satellite Server on a Red Hat Enterprise Linux base system operating in FIPS mode, you must install Satellite on a freshly provisioned RHEL base system operating in FIPS mode. For more information, see Preparing your environment for installation in Installing Satellite Server in a connected network environment . 2.2. Upgrading a disconnected Satellite Server Use this procedure if your Satellite Server is not connected to the Red Hat Content Delivery Network. Warning If you customized configuration files, either manually or using a tool such as Hiera, these changes are overwritten when you enter the satellite-maintain command during upgrading or updating. You can use the --noop option with the satellite-installer command to review the changes that are applied during upgrading or updating. 
For more information, see the Red Hat Knowledgebase solution How to use the noop option to check for changes in Satellite config files during an upgrade . The hammer import and export commands have been replaced with hammer content-import and hammer content-export tooling. If you have scripts that are using hammer content-view version export , hammer content-view version export-legacy , hammer repository export , or their respective import commands, you have to adjust them to use the hammer content-export command instead, along with its respective import command. If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Before you begin Review and update your firewall configuration before upgrading your Satellite Server. For more information, see Port and firewall requirements in Installing Satellite Server in a disconnected network environment . Ensure that you do not delete the manifest from the Customer Portal or in the Satellite web UI because this removes all the entitlements of your content hosts. All Satellite Servers must be on the same version. Upgrade disconnected Satellite Server Stop all Satellite services: Take a snapshot or create a backup: On a virtual machine, take a snapshot. On a physical machine, create a backup. Start all Satellite services: Optional: If you made manual edits to DNS or DHCP configuration in the /etc/zones.conf or /etc/dhcp/dhcpd.conf files, back up the configuration files because the installer only supports one domain or subnet, and therefore restoring changes from these backups might be required. Optional: If you made manual edits to DNS or DHCP configuration files and do not want to overwrite the changes, enter the following command: In the Satellite web UI, navigate to Hosts > Discovered hosts . If there are discovered hosts available, turn them off and then delete all entries under the Discovered hosts page. Select all other organizations in turn using the organization setting menu and repeat this action as required. Reboot these hosts after the upgrade has completed. Remove old repositories: Obtain the latest ISO files by following the Downloading the Binary DVD Images procedure in Installing Satellite Server in a disconnected network environment . Create directories to serve as a mount point, mount the ISO images, and configure the rhel8 repository by following the Configuring the base operating system with offline repositories procedure in Installing Satellite Server in a disconnected network environment . Do not install or update any packages at this stage. Configure the Satellite 6.16 repository from the ISO file. Copy the ISO file's repository data file for the Red Hat Satellite packages: Edit the /etc/yum.repos.d/satellite.repo file: Change the default InstallMedia repository name to Satellite-6.16 : Add the baseurl directive: Configure the Red Hat Satellite Maintenance repository from the ISO file. 
Copy the ISO file's repository data file for Red Hat Satellite Maintenance packages: Edit the /etc/yum.repos.d/satellite-maintenance.repo file: Change the default InstallMedia repository name to Satellite-Maintenance : Add the baseurl directive: Optional: Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logs in /var/log/foreman-installer/satellite.log to check if the process completed successfully. Upgrade satellite-maintain to its version: If you are using an external database, upgrade your database to PostgreSQL 13. Use the health check option to determine if the system is ready for upgrade. When prompted, enter the hammer admin user credentials to configure satellite-maintain with hammer credentials. These changes are applied to the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Perform the upgrade: If the script fails due to missing or outdated packages, you must download and install these separately. For more information, see Resolving Package Dependency Errors in Installing Satellite Server in a disconnected network environment . If the command told you to reboot, then reboot the system: Optional: If you made manual edits to DNS or DHCP configuration files, check and restore any changes required to the DNS and DHCP configuration files using the backups that you made. If you make changes in the step, restart Satellite services: If you have the OpenSCAP plugin installed, but do not have the default OpenSCAP content available, enter the following command. In the Satellite web UI, navigate to Configure > Discovery Rules . Associate selected organizations and locations with discovery rules. steps Optional: Upgrade the operating system to Red Hat Enterprise Linux 9 on the upgraded Satellite Server. For more information, see Chapter 3, Upgrading Red Hat Enterprise Linux on Satellite or Capsule . 2.3. Synchronizing the new repositories You must enable and synchronize the new 6.16 repositories before you can upgrade Capsule Servers and Satellite clients. Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . Toggle the Recommended Repositories switch to the On position. From the list of results, expand the following repositories and click the Enable icon to enable the repositories: To upgrade Satellite clients, enable the Red Hat Satellite Client 6 repositories for all Red Hat Enterprise Linux versions that clients use. If you have Capsule Servers, to upgrade them, enable the following repositories too: Red Hat Satellite Capsule 6.16 (for RHEL 8 x86_64) (RPMs) Red Hat Satellite Maintenance 6.16 (for RHEL 8 x86_64) (RPMs) Red Hat Enterprise Linux 8 (for x86_64 - BaseOS) (RPMs) Red Hat Enterprise Linux 8 (for x86_64 - AppStream) (RPMs) Note If the 6.16 repositories are not available, refresh the Red Hat Subscription Manifest. In the Satellite web UI, navigate to Content > Subscriptions , click Manage Manifest , then click Refresh . In the Satellite web UI, navigate to Content > Sync Status . Click the arrow to the product to view the available repositories. Select the repositories for 6.16. Note that Red Hat Satellite Client 6 does not have a 6.16 version. Choose Red Hat Satellite Client 6 instead. 
Click Synchronize Now . Important If an error occurs when you try to synchronize a repository, refresh the manifest. If the problem persists, raise a support request. Do not delete the manifest from the Customer Portal or in the Satellite web UI; this removes all the entitlements of your content hosts. If you use content views to control updates to the base operating system of Capsule Server, update those content views with new repositories, publish, and promote their updated versions. For more information, see Managing content views in Managing content . 2.4. Performing post-upgrade tasks Optional: If the default provisioning templates have been changed during the upgrade, recreate any templates cloned from the default templates. If the custom code is executed before and/or after the provisioning process, use custom provisioning snippets to avoid recreating cloned templates. For more information about configuring custom provisioning snippets, see Creating Custom Provisioning Snippets in Provisioning hosts . Pulp is introducing more data about container manifests to the API. This information allows Katello to display manifest labels, annotations, and information about the manifest type, such as if it is bootable or represents flatpak content. As a result, migrations must be performed to pull this content from manifests into the database. This migration takes time, so a pre-migration runs automatically after the upgrade to 6.16 to reduce future upgrade downtime. While the pre-migration is running, Satellite Server is fully functional but uses more hardware resources. 2.5. Upgrading Capsule Servers This section describes how to upgrade Capsule Servers from 6.15 to 6.16. Before you begin Review Section 1.2, "Prerequisites" . You must upgrade Satellite Server before you can upgrade any Capsule Servers. Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . Ensure the Red Hat Satellite Capsule 6.16 repository is enabled in Satellite Server and synchronized. Ensure that you synchronize the required repositories on Satellite Server. For more information, see Section 2.3, "Synchronizing the new repositories" . If you use content views to control updates to the base operating system of Capsule Server, update those content views with new repositories, publish, and promote their updated versions. For more information, see Managing content views in Managing content . Ensure the Capsule's base system is registered to the newly upgraded Satellite Server. Ensure the Capsule has the correct organization and location settings in the newly upgraded Satellite Server. Review and update your firewall configuration prior to upgrading your Capsule Server. For more information, see Preparing Your Environment for Capsule Installation in Installing Capsule Server . Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Upgrading Capsule Servers Create a backup. On a virtual machine, take a snapshot. On a physical machine, create a backup. For information on backups, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . 
Clean yum cache: Synchronize the satellite-capsule-6.16-for-rhel-8-x86_64-rpms repository in the Satellite Server. Publish and promote a new version of the content view with which the Capsule is registered. Optional: Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/capsule.log file to check if the process completed successfully. The rubygem-foreman_maintain package is installed from the Satellite Maintenance repository, or upgraded from that repository if it is already installed. Ensure the Capsule has access to satellite-maintenance-6.16-for-rhel-8-x86_64-rpms and execute: On Capsule Server, verify that the foreman_url setting points to the Satellite FQDN: Use the health check option to determine if the system is ready for upgrade: Review the results and address any highlighted error conditions before performing the upgrade. Perform the upgrade: If the command instructs you to reboot, reboot the system: Optional: If you made manual edits to DNS or DHCP configuration files, check and restore any changes required to the DNS and DHCP configuration files using the backups made earlier. Optional: If you use custom repositories, ensure that you enable these custom repositories after the upgrade completes. Upgrading Capsule Servers using remote execution Create a backup or take a snapshot. For more information on backups, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . In the Satellite web UI, navigate to Monitor > Jobs . Click Run Job . From the Job category list, select Maintenance Operations . From the Job template list, select Capsule Upgrade Playbook . In the Search Query field, enter the host name of the Capsule. Ensure that Apply to 1 host is displayed in the Resolves to field. In the target_version field, enter the target version of the Capsule. In the whitelist_options field, enter the options. Select the schedule for the job execution in Schedule . In the Type of query section, click Static Query . Next steps Optional: Upgrade the operating system to Red Hat Enterprise Linux 9 on the upgraded Satellite Server. For more information, see Chapter 3, Upgrading Red Hat Enterprise Linux on Satellite or Capsule . 2.6. Upgrading the external database You can upgrade an external database from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 while upgrading Satellite from 6.15 to 6.16. Prerequisites Create a new Red Hat Enterprise Linux 9 based host for the PostgreSQL server by following the external database on Red Hat Enterprise Linux 9 documentation. For more information, see Using External Databases with Satellite . Install PostgreSQL version 13 on the new Red Hat Enterprise Linux host. Procedure Create a backup. Restore the backup on the new server. Correct the permissions on the evr extension: If Satellite reaches the new database server via the old name, no further changes are required. Otherwise, reconfigure Satellite to use the new name:
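Before reconfiguring Satellite, you can optionally confirm that the ownership change on the evr extension from the earlier step took effect. The following is a hedged verification sketch rather than part of the official procedure; it assumes the database and the owning role are both named foreman, as in the command listing below.
# Hedged sketch: confirm that the evr extension is now owned by the foreman role.
runuser -l postgres -c "psql -d foreman -c \"SELECT extname, extowner::regrole FROM pg_extension WHERE extname='evr';\""
# The query should report foreman as the owner; if it does not, repeat the permissions fix above.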
[ "satellite-maintain service stop", "satellite-maintain service start", "satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dhcp-managed=false", "rm /etc/yum.repos.d/*", "cp /media/sat6/Satellite/media.repo /etc/yum.repos.d/satellite.repo", "vi /etc/yum.repos.d/satellite.repo", "[Satellite-6.16]", "baseurl=file:///media/sat6/Satellite", "cp /media/sat6/Maintenance/media.repo /etc/yum.repos.d/satellite-maintenance.repo", "vi /etc/yum.repos.d/satellite-maintenance.repo", "[Satellite-Maintenance]", "baseurl=file:///media/sat6/Maintenance/", "satellite-maintain self-upgrade --maintenance-repo-label Satellite-Maintenance", "satellite-maintain upgrade check --whitelist=\"repositories-validate,repositories-setup\"", "satellite-maintain upgrade run --whitelist=\"repositories-validate,repositories-setup\"", "reboot", "satellite-maintain service restart", "foreman-rake foreman_openscap:bulk_upload:default", "yum clean metadata", "satellite-maintain self-upgrade", "grep foreman_url /etc/foreman-proxy/settings.yml", "satellite-maintain upgrade check", "satellite-maintain upgrade run", "reboot", "runuser -l postgres -c \"psql -d foreman -c \\\"UPDATE pg_extension SET extowner = (SELECT oid FROM pg_authid WHERE rolname='foreman') WHERE extname='evr';\\\"\"", "satellite-installer --foreman-db-host newpostgres.example.com --katello-candlepin-db-host newpostgres.example.com --foreman-proxy-content-pulpcore-postgresql-host newpostgres.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/upgrading_disconnected_red_hat_satellite_to_6.16/upgrading_satellite_upgrading-disconnected
Chapter 20. JoSQL
Chapter 20. JoSQL Overview The JoSQL (SQL for Java objects) language enables you to evaluate predicates and expressions in Apache Camel. JoSQL employs a SQL-like query syntax to perform selection and ordering operations on data from in-memory Java objects - however, JoSQL is not a database. In the JoSQL syntax, each Java object instance is treated like a table row and each object method is treated like a column name. Using this syntax, it is possible to construct powerful statements for extracting and compiling data from collections of Java objects. For details, see http://josql.sourceforge.net/ . Adding the JoSQL module To use JoSQL in your routes you need to add a dependency on camel-josql to your project as shown in Example 20.1, "Adding the camel-josql dependency" . Example 20.1. Adding the camel-josql dependency Static import To use the sql() static method in your application code, include the following import statement in your Java source files: Variables Table 20.1, "SQL variables" lists the variables that are accessible when using JoSQL. Table 20.1. SQL variables Name Type Description exchange org.apache.camel.Exchange The current Exchange in org.apache.camel.Message The IN message out org.apache.camel.Message The OUT message property Object the Exchange property whose key is property header Object the IN message header whose key is header variable Object the variable whose key is variable Example Example 20.2, "Route using JoSQL" shows a route that uses JoSQL. Example 20.2. Route using JoSQL
[ "<!-- Maven POM File --> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-josql</artifactId> <version>USD{camel-version}</version> </dependency> </dependencies>", "import static org.apache.camel.builder.sql.SqlBuilder.sql;", "<camelContext> <route> <from uri=\"direct:start\"/> <setBody> <language language=\"sql\">select * from MyType</language> </setBody> <to uri=\"seda:regularQueue\"/> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/sql
2.11. Post-Installation Script
2.11. Post-Installation Script Figure 2.16. Post-Installation Script You can also add commands to execute on the system after the installation is completed. If the network is properly configured in the kickstart file, the network is enabled, and the script can include commands to access resources on the network. To include a post-installation script, type it in the text area. Warning Do not include the %post command. It is added for you. For example, to change the message of the day for the newly installed system, add the following command to the %post section: Note More examples can be found in Section 1.7.1, "Examples" . 2.11.1. Chroot Environment To run the post-installation script outside of the chroot environment, select the checkbox next to this option at the top of the Post-Installation window. This is equivalent to using the --nochroot option in the %post section. To make changes to the newly installed file system within the post-installation section, but outside of the chroot environment, you must prepend the directory name with /mnt/sysimage/ . For example, if you select Run outside of the chroot environment , the example above must be changed to the following:
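For illustration only, the corresponding part of the generated kickstart file might look similar to the following sketch; the Kickstart Configurator writes the %post line itself, so only the echo command is typed into the text area.
# Hypothetical sketch of the generated section when Run outside of the chroot environment is selected.
%post --nochroot
echo "Hackers will be punished!" > /mnt/sysimage/etc/motd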
[ "echo \"Hackers will be punished!\" > /etc/motd", "echo \"Hackers will be punished!\" > /mnt/sysimage/etc/motd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/rhkstool-post_installation_script
Chapter 23. Security
Chapter 23. Security The runtime version of OpenSSL is masked and SSL_OP_NO_TLSv1_1 must not be used when an application runs with OpenSSL 1.0.0 Because certain applications perform an incorrect check of the OpenSSL version, the actual runtime version of OpenSSL is masked and the build-time version is reported instead. Consequently, it is impossible to detect the currently running OpenSSL version using the SSLeay() function. Additionally, passing the value equivalent to the SSL_OP_NO_TLSv1_1 option as present on OpenSSL 1.0.1 to the SSL_CTX_set_options() function when running with OpenSSL 1.0.0 breaks the SSL/TLS support completely. To work around this problem, use another way to detect the currently running OpenSSL version. For example, it is possible to obtain a list of enabled ciphers with the SSL_get_ciphers() function and search for a TLS 1.2 cipher by parsing the list using the SSL_CIPHER_description() function. Finding such a cipher indicates that the application is running with an OpenSSL version later than 1.0.0, because TLS 1.2 support has been present since version 1.0.1. (BZ# 1497859 )
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/known_issues_security
Console APIs
Console APIs OpenShift Container Platform 4.12 Reference guide for console APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/console_apis/index
Using Rust 1.79.0 Toolset
Using Rust 1.79.0 Toolset Red Hat Developer Tools 1 Installing and using Rust 1.79.0 Toolset
null
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.79.0_toolset/index
4.9. Encryption
4.9. Encryption 4.9.1. Using LUKS Disk Encryption Linux Unified Key Setup-on-disk-format (or LUKS) allows you to encrypt partitions on your Linux computer. This is particularly important when it comes to mobile computers and removable media. LUKS allows multiple user keys to decrypt a master key, which is used for the bulk encryption of the partition. Overview of LUKS What LUKS does LUKS encrypts entire block devices and is therefore well-suited for protecting the contents of mobile devices such as removable storage media or laptop disk drives. The underlying contents of the encrypted block device are arbitrary. This makes it useful for encrypting swap devices. This can also be useful with certain databases that use specially formatted block devices for data storage. LUKS uses the existing device mapper kernel subsystem. LUKS provides passphrase strengthening, which protects against dictionary attacks. LUKS devices contain multiple key slots, allowing users to add backup keys or passphrases. What LUKS does not do: LUKS is not well-suited for scenarios requiring many (more than eight) users to have distinct access keys to the same device. LUKS is not well-suited for applications requiring file-level encryption. Important Disk-encryption solutions like LUKS only protect the data when your system is off. Once the system is on and LUKS has decrypted the disk, the files on that disk are available to anyone who would normally have access to them. 4.9.1.1. LUKS Implementation in Red Hat Enterprise Linux Red Hat Enterprise Linux 7 utilizes LUKS to perform file system encryption. By default, the option to encrypt the file system is unchecked during the installation. If you select the option to encrypt your hard drive, you will be prompted for a passphrase that you must enter every time you boot the computer. This passphrase "unlocks" the bulk encryption key that is used to decrypt your partition. If you choose to modify the default partition table, you can choose which partitions you want to encrypt. This is set in the partition table settings. The default cipher used for LUKS (see cryptsetup --help ) is aes-cbc-essiv:sha256 (ESSIV - Encrypted Salt-Sector Initialization Vector). Note that the installation program, Anaconda , uses by default XTS mode (aes-xts-plain64). The default key size for LUKS is 256 bits. The default key size for LUKS with Anaconda (XTS mode) is 512 bits. Ciphers that are available are: AES - Advanced Encryption Standard - FIPS PUB 197 Twofish (a 128-bit block cipher) Serpent cast5 - RFC 2144 cast6 - RFC 2612 4.9.1.2. Manually Encrypting Directories Warning Following this procedure will remove all data on the partition that you are encrypting. You WILL lose all your information! Make sure you back up your data to an external source before beginning this procedure! Enter runlevel 1 by typing the following at a shell prompt as root: Unmount your existing /home : If the command in the previous step fails, use fuser to find processes hogging /home and kill them: Verify /home is no longer mounted: Fill your partition with random data: This command proceeds at the sequential write speed of your device and may take some time to complete. It is an important step to ensure no unencrypted data is left on a used device, and to obfuscate the parts of the device that contain encrypted data as opposed to just random data.
Initialize your partition: Open the newly encrypted device: Make sure the device is present: Create a file system: Mount the file system: Make sure the file system is visible: Add the following to the /etc/crypttab file: Edit the /etc/fstab file, removing the old entry for /home and adding the following line: Restore default SELinux security contexts: Reboot the machine: The entry in the /etc/crypttab file makes your computer ask for your LUKS passphrase on boot. Log in as root and restore your backup. You now have an encrypted partition for all of your data to safely rest while the computer is off. 4.9.1.3. Add a New Passphrase to an Existing Device Use the following command to add a new passphrase to an existing device: After being prompted for any one of the existing passphrases for authentication, you will be prompted to enter the new passphrase. 4.9.1.4. Remove a Passphrase from an Existing Device Use the following command to remove a passphrase from an existing device: You will be prompted for the passphrase you want to remove and then for any one of the remaining passphrases for authentication. 4.9.1.5. Creating Encrypted Block Devices in Anaconda You can create encrypted devices during system installation. This allows you to easily configure a system with encrypted partitions. To enable block device encryption, check the Encrypt System check box when selecting automatic partitioning or the Encrypt check box when creating an individual partition, software RAID array, or logical volume. After you finish partitioning, you will be prompted for an encryption passphrase. This passphrase will be required to access the encrypted devices. If you have pre-existing LUKS devices and provided correct passphrases for them earlier in the install process, the passphrase entry dialog will also contain a check box. Checking this check box indicates that you would like the new passphrase to be added to an available slot in each of the pre-existing encrypted block devices. Note Checking the Encrypt System check box on the Automatic Partitioning screen and then choosing Create custom layout does not cause any block devices to be encrypted automatically. Note You can use kickstart to set a separate passphrase for each new encrypted block device. 4.9.1.6. Additional Resources For additional information on LUKS or encrypting hard drives under Red Hat Enterprise Linux 7 visit one of the following links: LUKS home page LUKS/cryptsetup FAQ LUKS - Linux Unified Key Setup Wikipedia article HOWTO: Creating an encrypted Physical Volume (PV) using a second hard drive and pvmove 4.9.2. Creating GPG Keys GPG is used to identify yourself and authenticate your communications, including those with people you do not know. GPG allows anyone reading a GPG-signed email to verify its authenticity. In other words, GPG allows someone to be reasonably certain that communications signed by you actually are from you. GPG is useful because it helps prevent third parties from altering code or intercepting conversations and altering the message. 4.9.2.1. Creating GPG Keys in GNOME To create a GPG Key in GNOME , follow these steps: Install the Seahorse utility, which makes GPG key management easier: To create a key, from the Applications Accessories menu select Passwords and Encryption Keys , which starts the application Seahorse . From the File menu select New and then PGP Key . Then click Continue . Type your full name, email address, and an optional comment describing who you are (for example: John C.
Smith, [email protected] , Software Engineer). Click Create . A dialog is displayed asking for a passphrase for the key. Choose a strong passphrase but also easy to remember. Click OK and the key is created. Warning If you forget your passphrase, you will not be able to decrypt the data. To find your GPG key ID, look in the Key ID column to the newly created key. In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . You should make a backup of your private key and store it somewhere secure. 4.9.2.2. Creating GPG Keys in KDE To create a GPG Key in KDE , follow these steps: Start the KGpg program from the main menu by selecting Applications Utilities Encryption Tool . If you have never used KGpg before, the program walks you through the process of creating your own GPG keypair. A dialog box appears prompting you to create a new key pair. Enter your name, email address, and an optional comment. You can also choose an expiration time for your key, as well as the key strength (number of bits) and algorithms. Enter your passphrase in the dialog box. At this point, your key appears in the main KGpg window. Warning If you forget your passphrase, you will not be able to decrypt the data. To find your GPG key ID, look in the Key ID column to the newly created key. In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . You should make a backup of your private key and store it somewhere secure. 4.9.2.3. Creating GPG Keys Using the Command Line Use the following shell command: This command generates a key pair that consists of a public and a private key. Other people use your public key to authenticate and decrypt your communications. Distribute your public key as widely as possible, especially to people who you know will want to receive authentic communications from you, such as a mailing list. A series of prompts directs you through the process. Press the Enter key to assign a default value if desired. The first prompt asks you to select what kind of key you prefer: In almost all cases, the default is the correct choice. An RSA/RSA key allows you not only to sign communications, but also to encrypt files. Choose the key size: Again, the default, 2048, is sufficient for almost all users, and represents an extremely strong level of security. Choose when the key will expire. It is a good idea to choose an expiration date instead of using the default, which is none . If, for example, the email address on the key becomes invalid, an expiration date will remind others to stop using that public key. Entering a value of 1y , for example, makes the key valid for one year. (You may change this expiration date after the key is generated, if you change your mind.) Before the gpg2 application asks for signature information, the following prompt appears: Enter y to finish the process. Enter your name and email address for your GPG key. Remember this process is about authenticating you as a real individual. For this reason, include your real name. If you choose a bogus email address, it will be more difficult for others to find your public key. This makes authenticating your communications difficult. If you are using this GPG key for self-introduction on a mailing list, for example, enter the email address you use on that list. Use the comment field to include aliases or other information. 
(Some people use different keys for different purposes and identify each key with a comment, such as "Office" or "Open Source Projects.") At the confirmation prompt, enter the letter O to continue if all entries are correct, or use the other options to fix any problems. Finally, enter a passphrase for your secret key. The gpg2 program asks you to enter your passphrase twice to ensure you made no typing errors. Finally, gpg2 generates random data to make your key as unique as possible. Move your mouse, type random keys, or perform other tasks on the system during this step to speed up the process. Once this step is finished, your keys are complete and ready to use: The key fingerprint is a shorthand "signature" for your key. It allows you to confirm to others that they have received your actual public key without any tampering. You do not need to write this fingerprint down. To display the fingerprint at any time, use this command, substituting your email address: Your "GPG key ID" consists of 8 hex digits identifying the public key. In the example above, the GPG key ID is 1B2AFA1C . In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . Warning If you forget your passphrase, the key cannot be used and any data encrypted using that key will be lost. 4.9.2.4. About Public Key Encryption Wikipedia - Public Key Cryptography HowStuffWorks - Encryption 4.9.3. Using openCryptoki for Public-Key Cryptography openCryptoki is a Linux implementation of PKCS#11 , which is a Public-Key Cryptography Standard that defines an application programming interface ( API ) to cryptographic devices called tokens. Tokens may be implemented in hardware or software. This chapter provides an overview of the way the openCryptoki system is installed, configured, and used in Red Hat Enterprise Linux 7. 4.9.3.1. Installing openCryptoki and Starting the Service To install the basic openCryptoki packages on your system, including a software implementation of a token for testing purposes, enter the following command as root : Depending on the type of hardware tokens you intend to use, you may need to install additional packages that provide support for your specific use case. For example, to obtain support for Trusted Platform Module ( TPM ) devices, you need to install the opencryptoki-tpmtok package. See the Installing Packages section of the Red Hat Enterprise Linux 7 System Administrator's Guide for general information on how to install packages using the Yum package manager. To enable the openCryptoki service, you need to run the pkcsslotd daemon. Start the daemon for the current session by executing the following command as root : To ensure that the service is automatically started at boot time, enter the following command: See the Managing Services with systemd chapter of the Red Hat Enterprise Linux 7 System Administrator's Guide for more information on how to use systemd targets to manage services. 4.9.3.2. Configuring and Using openCryptoki When started, the pkcsslotd daemon reads the /etc/opencryptoki/opencryptoki.conf configuration file, which it uses to collect information about the tokens configured to work with the system and about their slots. The file defines the individual slots using key-value pairs. Each slot definition can contain a description, a specification of the token library to be used, and an ID of the slot's manufacturer. Optionally, the version of the slot's hardware and firmware may be defined. 
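To see how the slots defined in /etc/opencryptoki/opencryptoki.conf appear once the daemon has read them, the pkcsconf utility can list them. This is a hedged sketch and not part of the official procedure; run it as root or as a member of the pkcs11 group.
# List the currently configured slots as seen by the pkcsslotd daemon and
# compare the output with the slot definitions in opencryptoki.conf.
pkcsconf -s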
See the opencryptoki.conf (5) manual page for a description of the file's format and for a more detailed description of the individual keys and the values that can be assigned to them. To modify the behavior of the pkcsslotd daemon at run time, use the pkcsconf utility. This tool allows you to show and configure the state of the daemon, as well as to list and modify the currently configured slots and tokens. For example, to display information about tokens, issue the following command (note that all non-root users that need to communicate with the pkcsslotd daemon must be a part of the pkcs11 system group): See the pkcsconf (1) manual page for a list of arguments available with the pkcsconf tool. Warning Keep in mind that only fully trusted users should be assigned membership in the pkcs11 group, as all members of this group have the right to block other users of the openCryptoki service from accessing configured PKCS#11 tokens. All members of this group can also execute arbitrary code with the privileges of any other users of openCryptoki . 4.9.4. Using Smart Cards to Supply Credentials to OpenSSH The smart card is a lightweight hardware security module in a USB stick, MicroSD, or SmartCard form factor. It provides a remotely manageable secure key store. In Red Hat Enterprise Linux 7, OpenSSH supports authentication using smart cards. To use your smart card with OpenSSH, store the public key from the card to the ~/.ssh/authorized_keys file. Install the PKCS#11 library provided by the opensc package on the client. PKCS#11 is a Public-Key Cryptography Standard that defines an application programming interface (API) to cryptographic devices called tokens. Enter the following command as root : 4.9.4.1. Retrieving a Public Key from a Card To list the keys on your card, use the ssh-keygen command. Specify the shared library (OpenSC in the following example) with the -D directive. 4.9.4.2. Storing a Public Key on a Server To enable authentication using a smart card on a remote server, transfer the public key to the remote server. Do this by copying the retrieved string (key) and pasting it to the remote shell, or by storing your key to a file ( smartcard.pub in the following example) and using the ssh-copy-id command: Storing a public key without a private key file requires you to use the SSH_COPY_ID_LEGACY=1 environment variable or the -f option. 4.9.4.3. Authenticating to a Server with a Key on a Smart Card OpenSSH can read your public key from a smart card and perform operations with your private key without exposing the key itself. This means that the private key does not leave the card. To connect to a remote server using your smart card for authentication, enter the following command and enter the PIN protecting your card: Replace the hostname with the actual host name to which you want to connect. To save unnecessary typing the next time you connect to the remote server, store the path to the PKCS#11 library in your ~/.ssh/config file: Connect by running the ssh command without any additional options: 4.9.4.4. Using ssh-agent to Automate PIN Logging In Set up environment variables to start using ssh-agent . You can skip this step in most cases because ssh-agent is already running in a typical session.
Use the following command to check whether you can connect to your authentication agent: To avoid writing your PIN every time you connect using this key, add the card to the agent by running the following command: To remove the card from ssh-agent , use the following command: Note FIPS 201-2 requires explicit user action by the Personal Identity Verification (PIV) cardholder as a condition for use of the digital signature key stored on the card. OpenSC correctly enforces this requirement. However, for some applications it is impractical to require the cardholder to enter the PIN for each signature. To cache the smart card PIN, remove the # character before the pin_cache_ignore_user_consent = true; option in the /etc/opensc-x86_64.conf . See the Cardholder Authentication for the PIV Digital Signature Key (NISTIR 7863) report for more information. 4.9.4.5. Additional Resources Setting up your hardware or software token is described in the Smart Card support in Red Hat Enterprise Linux 7 article. For more information about the pkcs11-tool utility for managing and using smart cards and similar PKCS#11 security tokens, see the pkcs11-tool(1) man page. 4.9.5. Trusted and Encrypted Keys Trusted and encrypted keys are variable-length symmetric keys generated by the kernel that utilize the kernel keyring service. The fact that the keys never appear in user space in an unencrypted form means that their integrity can be verified, which in turn means that they can be used, for example, by the extended verification module ( EVM ) to verify and confirm the integrity of a running system. User-level programs can only ever access the keys in the form of encrypted blobs . Trusted keys need a hardware component: the Trusted Platform Module ( TPM ) chip, which is used to both create and encrypt ( seal ) the keys. The TPM seals the keys using a 2048-bit RSA key called the storage root key ( SRK ). In addition to that, trusted keys may also be sealed using a specific set of the TPM 's platform configuration register ( PCR ) values. The PCR contains a set of integrity-management values that reflect the BIOS , boot loader, and operating system. This means that PCR -sealed keys can only be decrypted by the TPM on the exact same system on which they were encrypted. However, once a PCR -sealed trusted key is loaded (added to a keyring), and thus its associated PCR values are verified, it can be updated with new (or future) PCR values, so that a new kernel, for example, can be booted. A single key can also be saved as multiple blobs, each with different PCR values. Encrypted keys do not require a TPM , as they use the kernel AES encryption, which makes them faster than trusted keys. Encrypted keys are created using kernel-generated random numbers and encrypted by a master key when they are exported into user-space blobs. This master key can be either a trusted key or a user key, which is their main disadvantage - if the master key is not a trusted key, the encrypted key is only as secure as the user key used to encrypt it. 4.9.5.1. Working with keys Before performing any operations with the keys, ensure that the trusted and encrypted-keys kernel modules are loaded in the system. Consider the following points while loading the kernel modules in different RHEL kernel architectures: For RHEL kernels with the x86_64 architecture, the TRUSTED_KEYS and ENCRYPTED_KEYS code is built in as a part of the core kernel code. 
As a result, the x86_64 system users can use these keys without loading the trusted and encrypted-keys modules. For all other architectures, it is necessary to load the trusted and encrypted-keys kernel modules before performing any operations with the keys. To load the kernel modules, execute the following command: The trusted and encrypted keys can be created, loaded, exported, and updated using the keyctl utility. For detailed information about using keyctl , see keyctl (1) . Note In order to use a TPM (such as for creating and sealing trusted keys), it needs to be enabled and active. This can be usually achieved through a setting in the machine's BIOS or using the tpm_setactive command from the tpm-tools package of utilities. Also, the TrouSers application needs to be installed (the trousers package), and the tcsd daemon, which is a part of the TrouSers suite, running to communicate with the TPM . To create a trusted key using a TPM , execute the keyctl command with the following syntax: ~]USD keyctl add trusted name "new keylength [ options ]" keyring Using the above syntax, an example command can be constructed as follows: The above example creates a trusted key called kmk with the length of 32 bytes (256 bits) and places it in the user keyring ( @u ). The keys may have a length of 32 to 128 bytes (256 to 1024 bits). Use the show subcommand to list the current structure of the kernel keyrings: The print subcommand outputs the encrypted key to the standard output. To export the key to a user-space blob, use the pipe subcommand as follows: To load the trusted key from the user-space blob, use the add command again with the blob as an argument: The TPM -sealed trusted key can then be employed to create secure encrypted keys. The following command syntax is used for generating encrypted keys: ~]USD keyctl add encrypted name "new [ format ] key-type : master-key-name keylength " keyring Based on the above syntax, a command for generating an encrypted key using the already created trusted key can be constructed as follows: To create an encrypted key on systems where a TPM is not available, use a random sequence of numbers to generate a user key, which is then used to seal the actual encrypted keys. Then generate the encrypted key using the random-number user key: The list subcommand can be used to list all keys in the specified kernel keyring: Important Keep in mind that encrypted keys that are not sealed by a master trusted key are only as secure as the user master key (random-number key) used to encrypt them. Therefore, the master user key should be loaded as securely as possible and preferably early during the boot process. 4.9.5.2. Additional Resources The following offline and online resources can be used to acquire additional information pertaining to the use of trusted and encrypted keys. Installed Documentation keyctl (1) - Describes the use of the keyctl utility and its subcommands. Online Documentation Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services, such as the Apache HTTP Server . https://www.kernel.org/doc/Documentation/security/keys-trusted-encrypted.txt - The official documentation about the trusted and encrypted keys feature of the Linux kernel. 
See Also Section A.1.1, "Advanced Encryption Standard - AES" provides a concise description of the Advanced Encryption Standard . Section A.2, "Public-key Encryption" describes the public-key cryptographic approach and the various cryptographic protocols it uses. 4.9.6. Using the Random Number Generator In order to be able to generate secure cryptographic keys that cannot be easily broken, a source of random numbers is required. Generally, the more random the numbers are, the better the chance of obtaining unique keys. Entropy for generating random numbers is usually obtained from computing environmental "noise" or using a hardware random number generator . The rngd daemon, which is a part of the rng-tools package, is capable of using both environmental noise and hardware random number generators for extracting entropy. The daemon checks whether the data supplied by the source of randomness is sufficiently random and then stores it in the random-number entropy pool of the kernel. The random numbers it generates are made available through the /dev/random and /dev/urandom character devices. The difference between /dev/random and /dev/urandom is that the former is a blocking device, which means it stops supplying numbers when it determines that the amount of entropy is insufficient for generating a properly random output. Conversely, /dev/urandom is a non-blocking source, which reuses the entropy pool of the kernel and is thus able to provide an unlimited supply of pseudo-random numbers, albeit with less entropy. As such, /dev/urandom should not be used for creating long-term cryptographic keys. To install the rng-tools package, issue the following command as the root user: To start the rngd daemon, execute the following command as root : To query the status of the daemon, use the following command: To start the rngd daemon with optional parameters, execute it directly. For example, to specify an alternative source of random-number input (other than /dev/hwrandom ), use the following command: The command starts the rngd daemon with /dev/hwrng as the device from which random numbers are read. Similarly, you can use the -o (or --random-device ) option to choose the kernel device for random-number output (other than the default /dev/random ). See the rngd (8) manual page for a list of all available options. To check which sources of entropy are available in a given system, execute the following command as root : Note After entering the rngd -v command, the corresponding process continues running in the background. The -b, --background option (become a daemon) is applied by default. If no TPM device is present, you will see only the Intel Digital Random Number Generator (DRNG) as a source of entropy. To check if your CPU supports the RDRAND processor instruction, enter the following command: Note For more information and software code examples, see Intel Digital Random Number Generator (DRNG) Software Implementation Guide. The rng-tools package also contains the rngtest utility, which can be used to check the randomness of data. To test the level of randomness of the output of /dev/random , use the rngtest tool as follows: A high number of failures shown in the output of the rngtest tool indicates that the randomness of the tested data is insufficient and should not be relied upon. See the rngtest (1) manual page for a list of options available for the rngtest utility.
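One simple way to observe the kernel entropy pool that rngd feeds, as described above, is to read the kernel's entropy counter. This is a hedged sketch rather than part of the official procedure.
# Show the current size of the kernel entropy pool, in bits; compare the value
# before and after starting the rngd daemon.
cat /proc/sys/kernel/random/entropy_avail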
Red Hat Enterprise Linux 7 introduced the virtio RNG (Random Number Generator) device that provides KVM virtual machines with access to entropy from the host machine. With the recommended setup, hwrng feeds into the entropy pool of the host Linux kernel (through /dev/random ), and QEMU will use /dev/random as the source for entropy requested by guests. Figure 4.1. The virtio RNG device Previously, Red Hat Enterprise Linux 7.0 and Red Hat Enterprise Linux 6 guests could make use of the entropy from hosts through the rngd user space daemon. Setting up the daemon was a manual step for each Red Hat Enterprise Linux installation. With Red Hat Enterprise Linux 7.1, the manual step has been eliminated, making the entire process seamless and automatic. The use of rngd is now not required and the guest kernel itself fetches entropy from the host when the available entropy falls below a specific threshold. The guest kernel is then in a position to make random numbers available to applications as soon as they request them. The Red Hat Enterprise Linux installer, Anaconda , now provides the virtio-rng module in its installer image, making available host entropy during the Red Hat Enterprise Linux installation. Important To correctly decide which random number generator you should use in your scenario, see the Understanding the Red Hat Enterprise Linux random number generator interface article.
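Inside a guest, a quick hedged check can confirm that the virtio RNG device described above is available to the guest kernel; the exact device name may differ between systems.
# List the hardware RNG sources the guest kernel detected and the one currently in use.
cat /sys/class/misc/hw_random/rng_available
cat /sys/class/misc/hw_random/rng_current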
[ "telinit 1", "umount /home", "fuser -mvk /home", "grep home /proc/mounts", "shred -v --iterations=1 /dev/VG00/LV_home", "cryptsetup --verbose --verify-passphrase luksFormat /dev/VG00/LV_home", "cryptsetup luksOpen /dev/VG00/LV_home home", "ls -l /dev/mapper | grep home", "mkfs.ext3 /dev/mapper/home", "mount /dev/mapper/home /home", "df -h | grep home", "home /dev/VG00/LV_home none", "/dev/mapper/home /home ext3 defaults 1 2", "/sbin/restorecon -v -R /home", "shutdown -r now", "cryptsetup luksAddKey device", "cryptsetup luksRemoveKey device", "~]# yum install seahorse", "~]USD gpg2 --gen-key", "Please select what kind of key you want: (1) RSA and RSA (default) (2) DSA and Elgamal (3) DSA (sign only) (4) RSA (sign only) Your selection?", "RSA keys may be between 1024 and 4096 bits long. What keysize do you want? (2048)", "Please specify how long the key should be valid. 0 = key does not expire d = key expires in n days w = key expires in n weeks m = key expires in n months y = key expires in n years key is valid for? (0)", "Is this correct (y/N)?", "pub 1024D/1B2AFA1C 2005-03-31 John Q. Doe <[email protected]> Key fingerprint = 117C FE83 22EA B843 3E86 6486 4320 545E 1B2A FA1C sub 1024g/CEA4B22E 2005-03-31 [expires: 2006-03-31]", "~]USD gpg2 --fingerprint [email protected]", "~]# yum install opencryptoki", "~]# systemctl start pkcsslotd", "~]# systemctl enable pkcsslotd", "~]USD pkcsconf -t", "~]# yum install opensc", "~]USD ssh-keygen -D /usr/lib64/pkcs11/opensc-pkcs11.so ssh-rsa AAAAB3NzaC1yc[...]+g4Mb9", "~]USD ssh-copy-id -f -i smartcard.pub user@hostname user@hostname's password: Number of key(s) added: 1 Now try logging into the machine, with: \"ssh user@hostname\" and check to make sure that only the key(s) you wanted were added.", "[localhost ~]USD ssh -I /usr/lib64/pkcs11/opensc-pkcs11.so hostname Enter PIN for 'Test (UserPIN)': [hostname ~]USD", "Host hostname PKCS11Provider /usr/lib64/pkcs11/opensc-pkcs11.so", "[localhost ~]USD ssh hostname Enter PIN for 'Test (UserPIN)': [hostname ~]USD", "~]USD ssh-add -l Could not open a connection to your authentication agent. ~]USD eval `ssh-agent`", "~]USD ssh-add -s /usr/lib64/pkcs11/opensc-pkcs11.so Enter PIN for 'Test (UserPIN)': Card added: /usr/lib64/pkcs11/opensc-pkcs11.so", "~]USD ssh-add -e /usr/lib64/pkcs11/opensc-pkcs11.so Card removed: /usr/lib64/pkcs11/opensc-pkcs11.so", "~]# modprobe trusted encrypted-keys", "~]USD keyctl add trusted kmk \"new 32\" @u 642500861", "~]USD keyctl show Session Keyring -3 --alswrv 500 500 keyring: _ses 97833714 --alswrv 500 -1 \\_ keyring: _uid.1000 642500861 --alswrv 500 500 \\_ trusted: kmk", "~]USD keyctl pipe 642500861 > kmk.blob", "~]USD keyctl add trusted kmk \"load `cat kmk.blob`\" @u 268728824", "~]USD keyctl add encrypted encr-key \"new trusted:kmk 32\" @u 159771175", "~]USD keyctl add user kmk-user \"`dd if=/dev/urandom bs=1 count=32 2>/dev/null`\" @u 427069434", "~]USD keyctl add encrypted encr-key \"new user:kmk-user 32\" @u 1012412758", "~]USD keyctl list @u 2 keys in keyring: 427069434: --alswrv 1000 1000 user: kmk-user 1012412758: --alswrv 1000 1000 encrypted: encr-key", "~]# yum install rng-tools", "~]# systemctl start rngd", "~]# systemctl status rngd", "~]# rngd --rng-device= /dev/hwrng", "~]# rngd -vf Unable to open file: /dev/tpm0 Available entropy sources: DRNG", "~]USD cat /proc/cpuinfo | grep rdrand", "~]USD cat /dev/random | rngtest -c 1000 rngtest 5 Copyright (c) 2004 by Henrique de Moraes Holschuh This is free software; see the source for copying conditions. 
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. rngtest: starting FIPS tests rngtest: bits received from input: 20000032 rngtest: FIPS 140-2 successes: 998 rngtest: FIPS 140-2 failures: 2 rngtest: FIPS 140-2(2001-10-10) Monobit: 0 rngtest: FIPS 140-2(2001-10-10) Poker: 0 rngtest: FIPS 140-2(2001-10-10) Runs: 0 rngtest: FIPS 140-2(2001-10-10) Long run: 2 rngtest: FIPS 140-2(2001-10-10) Continuous run: 0 rngtest: input channel speed: (min=1.171; avg=8.453; max=11.374)Mibits/s rngtest: FIPS tests speed: (min=15.545; avg=143.126; max=157.632)Mibits/s rngtest: Program run time: 2390520 microseconds" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Encryption
Chapter 2. Managing repositories to build your customized operating systems
Chapter 2. Managing repositories to build your customized operating systems You can define your customized repositories with third-party content without having to manage their lifecycle. You can use your third-party content to build an image, and when you launch that image to the public cloud environment, you can use those repositories with the dnf tool. 2.1. Adding a custom repository Define your repository to be able to add packages from this repository to your customized images. Prerequisites You have a RHEL subscription. You have administrator access to the Red Hat Hybrid Cloud Console web user interface or repository administrator role. You have the URL link to your repository content. Procedure Access Hybrid Cloud Console , click Services Red Hat Enterprise Linux Content Repositories . Click Add repositories . The Add custom repositories wizard opens. In the Name field provide a name for your custom repository. In the Repository type , select: Snapshotting Enables creating a daily snapshot of this repository. That enables you to create Image Blueprints with consistent repository content. Introspect only Disables snapshots for this repository. Upload Enables uploading packages to your custom repository. The file must have an rpm extension. Note, the Upload option is available only in the Preview mode. If you selected Snapshotting or Introspect only , in the URL field, provide the URL to your repository. Optional: In the Restrict architecture drop-down menu, select an architecture. You can allow all the architectures or restrict it to your system architecture to prevent incorrect repositories availability. Optional: In the Restrict OS version drop-down menu, select an operating system (OS). You can allow all the RHEL versions or select one for your system version to prevent incorrect repositories being available. Optional: Disable Modularity filtering option. When the Modularity filtering option is disabled, you can update packages in this repository even if the packages are part of a module. Optional: In the GPG key field, upload the .txt file with a GPG key or paste the URL or value of an existing GPG key. The GPG key can be used to verify the signed packages of a repository. If you do not provide the GPG key for a repository, your system cannot perform the verification. If you selected Snapshotting or Introspect only , click Save . The Red Hat Hybrid Cloud Console validates the project status. If your repository is marked as Invalid , check the repository URL that you added. For details about the repository status, see Repository status section. If you selected Upload : Click Save and upload content . The Upload content window opens. Click Upload , select the rpm files you want to upload, and click Open . Click Confirm changes when your file uploading is complete. Verification Open the list of custom repositories and verify that the repository you added is listed. 2.2. Modifying a custom repository You can modify a custom repository when you need to update information for that repository. Prerequisites You have a RHEL subscription. You have administrator access to the Red Hat Hybrid Cloud Console web user interface or repository administrator role. Procedure Access Hybrid Cloud Console , click Services Red Hat Enterprise Linux Content Repositories . Find a repository you want to modify and click Edit in the Options menu. In the Edit custom repository wizard, modify the information you need. Click Save changes . 2.3. 
Removing a custom repository When you no longer need a custom repository you can delete it. Prerequisites You have a RHEL subscription. You have administrator access to the Red Hat Hybrid Cloud Console web user interface or repository administrator role. Procedure Access Hybrid Cloud Console , click Services Red Hat Enterprise Linux Content Repositories . Find a repository to delete and click Delete in the Options menu. Verification Open the list of custom repositories, and verify that the repository no longer exists. 2.4. Adding existing repositories from popular repositories to custom repositories The Red Hat Hybrid Cloud Console has pre-configured repositories that you can use to build your customized RHEL image. Prerequisites You have a RHEL subscription. You have administrator access to the Red Hat Hybrid Cloud Console web user interface or repository administrator role. Procedure Access Hybrid Cloud Console , click Services Red Hat Enterprise Linux Content Repositories . On the Custom repositories page click the Popular repositories tab. Search for the repository you want to add and click Add . Verification Select the Your repositories tab and verify that the new repository is displayed in the list of custom repositories. 2.5. Removing snapshots from a repository You can delete snapshots from your custom repository to avoid broken functionality or security vulnerabilities that the old content might introduce. Important Snapshots get removed automatically after 365 days unless there is no newer snapshot of this repository. If a repository has multiple snapshots and the snapshot for removal is used in a content template, this snapshot will be replaced with the newer snapshot in the content template. Prerequisites You have a RHEL subscription. You have administrator access to the Red Hat Hybrid Cloud Console web user interface or repository administrator role. You have added a custom repository. See Adding a custom repository . Procedure Access Hybrid Cloud Console , click Services Red Hat Enterprise Linux Content Repositories . In the Your repositories tab, find the repository containing the snapshot to be removed, and click View all snapshots in the Option menu. In the Snapshot window, select all snapshots that you want to remove from this repository, and click Remove selected snapshots . In the Remove snapshot window, confirm the removal of the selected snapshots and click Remove . 2.6. Updating custom repository after changes When you make changes to your repository you can trigger a refresh of that repository in the Red Hat Hybrid Cloud Console. Prerequisites You have a RHEL subscription. You have administrator access to the Red Hat Hybrid Cloud Console web user interface or repository administrator role. You updated your custom repository. Procedure Access Hybrid Cloud Console , click Services Red Hat Enterprise Linux Content Repositories . Find a repository you want to modify and click Introspect Now in the Options menu. The status of that repository changes to In progress that indicates the Hybrid Cloud Console is connecting to the repository and checking for changes. The Red Hat Hybrid Cloud Console checks the status of the repositories every 24 hours and again every 8 hours if the status check fails. 2.7. Repository status in the Red Hat Hybrid Cloud Console The repository status shows if the repository is available. The Red Hat Hybrid Cloud Console checks the repository status periodically and can change it. 
The following table describes the repository status in the Red Hat Hybrid Cloud Console. Table 2.1. Repository status Status Description Valid The Red Hat Hybrid Cloud Console has validated the repository and you can use it. Invalid The Red Hat Hybrid Cloud Console never validated this repository. You cannot use it. Unavailable The repository was valid at least once. The Red Hat Hybrid Cloud Console cannot reach this repository at the moment. You cannot use it. In progress The repository validation is in progress.
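As noted at the beginning of this chapter, systems launched from an image built with these custom repositories can consume them with the dnf tool. The following is a minimal, hedged sketch; the package name is a placeholder.
# Confirm the custom repository is configured on the launched system, then install from it.
dnf repolist
dnf install <package-name>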
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/deploying_and_managing_rhel_systems_in_hybrid_clouds/assembly_managing-repositories-in-red-hat-hybrid-cloud-console_host-management-services
11.3. Bridged Networking with libvirt
11.3. Bridged Networking with libvirt Bridged networking (also known as physical device sharing) is used to dedicate a physical device to a virtual machine. Bridging is often used for more advanced setups and on servers with multiple network interfaces. To create a bridge ( br0 ) based on the eth0 interface, execute the following command on the host: Important NetworkManager does not support bridging. NetworkManager must be disabled to use networking with the network scripts (located in the /etc/sysconfig/network-scripts/ directory). If you do not want to disable NetworkManager entirely, add " NM_CONTROLLED=no " to the ifcfg-* network script being used for the bridge.
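After running the virsh iface-bridge command shown below, you can verify from the host that the bridge exists and that the physical interface is attached to it. This is a hedged verification sketch, not part of the official procedure.
# List host interfaces known to libvirt, including the new bridge.
virsh iface-list --all
# Show the bridge and the physical interface enslaved to it.
brctl show br0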
[ "virsh iface-bridge eth0 br0", "chkconfig NetworkManager off chkconfig network on service NetworkManager stop service network start" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/sect-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Network_Configuration-Network_Configuration-Bridged_networking_with_libvirt
Chapter 2. Support
Chapter 2. Support Only the configuration options described in this documentation are supported for logging. Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged . An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed . Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. Logging is not: A high scale log collection system Security Information and Event Monitoring (SIEM) compliant Historical or long term log retention or storage A guaranteed log sink Secure storage - audit logs are not stored by default 2.1. Supported API custom resource definitions LokiStack development is ongoing. Not all APIs are currently supported. Table 2.1. Loki API support states CustomResourceDefinition (CRD) ApiVersion Support state LokiStack lokistack.loki.grafana.com/v1 Supported in 5.5 RulerConfig rulerconfig.loki.grafana/v1 Supported in 5.7 AlertingRule alertingrule.loki.grafana/v1 Supported in 5.7 RecordingRule recordingrule.loki.grafana/v1 Supported in 5.7 2.2. Unsupported configurations You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: The Elasticsearch custom resource (CR) The Kibana deployment The fluent.conf file The Fluentd daemon set You must set the OpenShift Elasticsearch Operator to the Unmanaged state to modify the Elasticsearch deployment files. Explicitly unsupported cases include: Configuring default log rotation . You cannot modify the default log rotation configuration. Configuring the collected log location . You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log . Throttling log collection . You cannot throttle down the rate at which the logs are read in by the log collector. Configuring the logging collector using environment variables . You cannot use environment variables to modify the log collector. Configuring how the log collector normalizes logs . You cannot modify default log normalization. 2.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. 
An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 2.4. Collecting logging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. For prompt support, supply diagnostic information for both OpenShift Container Platform and logging. Note Do not use the hack/logging-dump.sh script. The script is no longer supported and does not collect data. 2.4.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. For your logging, must-gather collects the following information: Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level Cluster-level resources, including nodes, roles, and role bindings at the cluster level OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. 2.4.2. Collecting logging data You can use the oc adm must-gather CLI command to collect information about logging. Procedure To collect logging information with must-gather : Navigate to the directory where you want to store the must-gather information. 
Run the oc adm must-gather command against the logging image: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408 . Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: USD tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 Attach the compressed file to your support case on the Red Hat Customer Portal .
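As a rough illustration of the Unmanaged state discussed in section 2.3, the following sketch patches the managementState of the logging deployment's custom resource. The resource kind and instance name ( clusterlogging/instance in the openshift-logging namespace) are assumptions based on common deployments, so confirm them in your own cluster before using these commands:
# Sketch only: assumes a ClusterLogging custom resource named "instance".
oc -n openshift-logging patch clusterlogging/instance --type merge -p '{"spec":{"managementState":"Unmanaged"}}'
# Return the instance to a supported, managed state when you are finished:
oc -n openshift-logging patch clusterlogging/instance --type merge -p '{"spec":{"managementState":"Managed"}}'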
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/logging/support
9.10. The Security Audit Logger
9.10. The Security Audit Logger Red Hat JBoss Data Grid includes a logger that audits security decisions for the cache, recording whether a cache or cache manager operation was allowed or denied. The default audit logger is org.infinispan.security.impl.DefaultAuditLogger . This logger outputs audit logs using the available logging framework (for example, JBoss Logging) and provides results at the TRACE level and the AUDIT category. To send the AUDIT category to either a log file, a JMS queue, or a database, use the appropriate log appender. 9.10.1. Configure the Security Audit Logger (Library Mode) Use the following to declaratively configure the audit logger in Red Hat JBoss Data Grid: Use the following to programmatically configure the audit logger in JBoss Data Grid: 9.10.2. Configure the Security Audit Logger (Remote Client-Server Mode) Use the following code to configure the audit logger in Red Hat JBoss Data Grid Remote Client-Server Mode. To use a different audit logger, specify it in the <authorization> element. The <authorization> element must be within the <cache-container> element in the Infinispan subsystem (in the standalone.xml configuration file). Note The default audit logger for server mode is org.jboss.as.clustering.infinispan.subsystem.ServerAuditLogger , which sends the log messages to the server audit log. See the Management Interface Audit Logging chapter in the JBoss Enterprise Application Platform Administration and Configuration Guide for more information. 9.10.3. Custom Audit Loggers Users can implement custom audit loggers in Red Hat JBoss Data Grid Library and Remote Client-Server Mode. The custom logger must implement the org.infinispan.security.AuditLogger interface. If no custom logger is provided, the default logger ( DefaultAuditLogger ) is used.
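If you route the AUDIT category to its own file in Remote Client-Server Mode, one possible approach is a dedicated handler and logger category defined through the JBoss management CLI. The handler name, file name, and the AUDIT category shown here are assumptions for illustration only; confirm the category that your audit logger actually emits and adjust the commands for your environment:
# Run in the management CLI (for example, bin/cli.sh --connect); names are placeholders.
/subsystem=logging/periodic-rotating-file-handler=AUDIT_LOG:add(file={"relative-to"=>"jboss.server.log.dir","path"=>"audit.log"},suffix=".yyyy-MM-dd")
/subsystem=logging/logger=AUDIT:add(level=TRACE,handlers=["AUDIT_LOG"])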
[ "<infinispan> <global-security> <authorization audit-logger = \"org.infinispan.security.impl.DefaultAuditLogger\"> </authorization> </global-security> </infinispan>", "GlobalConfigurationBuilder global = new GlobalConfigurationBuilder(); global.security() .authorization() .auditLogger(new DefaultAuditLogger());", "<cache-container name=\"local\" default-cache=\"default\"> <security> <authorization audit-logger=\"org.infinispan.security.impl.DefaultAuditLogger\"> <identity-role-mapper/> <role name=\"admin\" permissions=\"ALL\"/> <role name=\"reader\" permissions=\"READ\"/> <role name=\"writer\" permissions=\"WRITE\"/> <role name=\"supervisor\" permissions=\"ALL_READ ALL_WRITE\"/> </authorization> </security> <local-cache name=\"default\" start=\"EAGER\"> <locking isolation=\"NONE\" acquire-timeout=\"30000\" concurrency-level=\"1000\" striping=\"false\"/> <transaction mode=\"NONE\"/> <security> <authorization roles=\"admin reader writer supervisor\"/> </security> </local-cache>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/sect-the_security_audit_logger
4.4. Logical Volume Administration
4.4. Logical Volume Administration This section describes the commands that perform the various aspects of logical volume administration. 4.4.1. Creating Logical Volumes To create a logical volume, use the lvcreate command. You can create linear volumes, striped volumes, and mirrored volumes, as described in the following subsections. If you do not specify a name for the logical volume, the default name lvol # is used, where # is the internal number of the logical volume. The following sections provide examples of logical volume creation for the three types of logical volumes you can create with LVM. 4.4.1.1. Creating Linear Volumes When you create a logical volume, the logical volume is carved from a volume group using the free extents on the physical volumes that make up the volume group. Normally logical volumes use up any space available on the underlying physical volumes on a next-free basis. Modifying the logical volume frees and reallocates space in the physical volumes. The following command creates a logical volume 10 gigabytes in size in the volume group vg1 . The following command creates a 1500 megabyte linear logical volume named testlv in the volume group testvg , creating the block device /dev/testvg/testlv . The following command creates a 50 gigabyte logical volume named gfslv from the free extents in volume group vg0 . You can use the -l argument of the lvcreate command to specify the size of the logical volume in extents. You can also use this argument to specify the percentage of the volume group to use for the logical volume. The following command creates a logical volume called mylv that uses 60% of the total space in volume group testvg . You can also use the -l argument of the lvcreate command to specify the percentage of the remaining free space in a volume group as the size of the logical volume. The following command creates a logical volume called yourlv that uses all of the unallocated space in the volume group testvg . You can use the -l argument of the lvcreate command to create a logical volume that uses the entire volume group. Another way to create a logical volume that uses the entire volume group is to use the vgdisplay command to find the "Total PE" size and to use those results as input to the lvcreate command. The following commands create a logical volume called mylv that fills the volume group named testvg . The underlying physical volumes used to create a logical volume can be important if the physical volume needs to be removed, so you may need to consider this possibility when you create the logical volume. For information on removing a physical volume from a volume group, see Section 4.3.5, "Removing Physical Volumes from a Volume Group" . To create a logical volume to be allocated from a specific physical volume in the volume group, specify the physical volume or volumes at the end of the lvcreate command line. The following command creates a logical volume named testlv in volume group testvg allocated from the physical volume /dev/sdg1 . You can specify which extents of a physical volume are to be used for a logical volume. The following example creates a linear logical volume out of extents 0 through 25 of physical volume /dev/sda1 and extents 50 through 125 of physical volume /dev/sdb1 in volume group testvg . The following example creates a linear logical volume out of extents 0 through 25 of physical volume /dev/sda1 and then continues laying out the logical volume at extent 100.
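After creating a logical volume, you can confirm its size and which physical volumes back it. A brief check, using the testvg and testlv names from the examples above:
lvs testvg
lvdisplay /dev/testvg/testlv
# Show the underlying devices for each logical volume in the volume group:
lvs -o +devices testvg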
The default policy for how the extents of a logical volume are allocated is inherit , which applies the same policy as for the volume group. These policies can be changed using the lvchange command. For information on allocation policies, see Section 4.3.1, "Creating Volume Groups" . 4.4.1.2. Creating Striped Volumes For large sequential reads and writes, creating a striped logical volume can improve the efficiency of the data I/O. For general information about striped volumes, see Section 2.3.2, "Striped Logical Volumes" . When you create a striped logical volume, you specify the number of stripes with the -i argument of the lvcreate command. This determines over how many physical volumes the logical volume will be striped. The number of stripes cannot be greater than the number of physical volumes in the volume group (unless the --alloc anywhere argument is used). The stripe size should be tuned to a power of 2 between 4kB and 512kB, and matched to the application's I/O that is using the striped volume. The -I argument of the lvcreate command specifies the stripe size in kilobytes. If the underlying physical devices that make up a striped logical volume are different sizes, the maximum size of the striped volume is determined by the smallest underlying device. For example, in a two-legged stripe, the maximum size is twice the size of the smaller device. In a three-legged stripe, the maximum size is three times the size of the smallest device. The following command creates a striped logical volume across 2 physical volumes with a stripe size of 64kB. The logical volume is 50 gigabytes in size, is named gfslv , and is carved out of volume group vg0 . As with linear volumes, you can specify the extents of the physical volume that you are using for the stripe. The following command creates a striped volume 100 extents in size that stripes across two physical volumes, is named stripelv and is in volume group testvg . The stripe will use extents 0-50 of /dev/sda1 and extents 50-100 of /dev/sdb1 . 4.4.1.3. Creating Mirrored Volumes When you create a mirrored volume, you specify the number of copies of the data to make with the -m argument of the lvcreate command. Specifying -m1 creates one mirror, which yields two copies of the file system: a linear logical volume plus one copy. Similarly, specifying -m2 creates two mirrors, yielding three copies of the file system. The following command creates a mirrored logical volume with a single mirror. The volume is 50 gigabytes in size, is named mirrorlv , and is carved out of volume group vg0 : An LVM mirror divides the device being copied into regions that, by default, are 512KB in size. You can use the -R argument to specify the region size in MB. LVM maintains a small log, which it uses to keep track of which regions are in sync with the mirror or mirrors. By default, this log is kept on disk, which keeps it persistent across reboots. You can specify instead that this log be kept in memory with the --corelog argument; this eliminates the need for an extra log device, but it requires that the entire mirror be resynchronized at every reboot. The following command creates a mirrored logical volume from the volume group bigvg . The logical volume is named ondiskmirvol and has a single mirror. The volume is 12MB in size and keeps the mirror log in memory. When a mirror is created, the mirror regions are synchronized. For large mirror components, the sync process may take a long time.
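While the initial synchronization runs, you can watch its progress, and you can confirm the layout of a striped volume, with the lvs reporting fields shown below; copy_percent, stripes, and stripesize are standard report fields, but the exact column output varies between LVM releases:
# Mirror synchronization progress for the volumes in vg0:
lvs -a -o +devices,copy_percent vg0
# Stripe count and stripe size for the striped volume created above:
lvs -o +stripes,stripesize vg0/gfslv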
When you are creating a new mirror that does not need to be revived, you can specify the nosync argument to indicate that an initial synchronization from the first device is not required. You can specify which devices to use for the mirror legs and log, and which extents of the devices to use. To force the log onto a particular disk, specify exactly one extent on the disk on which it will be placed. LVM does not necessarily respect the order in which devices are listed in the command line. If any physical volumes are listed, that is the only space on which allocation will take place. Any physical extents included in the list that are already allocated will be ignored. The following command creates a mirrored logical volume with a single mirror. The volume is 500 megabytes in size, it is named mirrorlv , and it is carved out of volume group vg0 . The first leg of the mirror is on device /dev/sda1 , the second leg of the mirror is on device /dev/sdb1 , and the mirror log is on /dev/sdc1 . The following command creates a mirrored logical volume with a single mirror. The volume is 500 megabytes in size, it is named mirrorlv , and it is carved out of volume group vg0 . The first leg of the mirror is on extents 0 through 499 of device /dev/sda1 , the second leg of the mirror is on extents 0 through 499 of device /dev/sdb1 , and the mirror log starts on extent 0 of device /dev/sdc1 . These are 1MB extents. If any of the specified extents have already been allocated, they will be ignored. Note Creating a mirrored LVM logical volume in a cluster requires the same commands and procedures as creating a mirrored LVM logical volume on a single node. However, in order to create a mirrored LVM volume in a cluster, the cluster and cluster mirror infrastructure must be running, the cluster must be quorate, and the locking type in the lvm.conf file must be set correctly to enable cluster locking. For an example of creating a mirrored volume in a cluster, see Section 5.5, "Creating a Mirrored LVM Logical Volume in a Cluster" . 4.4.1.4. Changing Mirrored Volume Configuration You can convert a logical volume from a mirrored volume to a linear volume or from a linear volume to a mirrored volume with the lvconvert command. You can also use this command to reconfigure other mirror parameters of an existing logical volume, such as corelog . When you convert a logical volume to a mirrored volume, you are basically creating mirror legs for an existing volume. This means that your volume group must contain the devices and space for the mirror legs and for the mirror log. If you lose a leg of a mirror, LVM converts the volume to a linear volume so that you still have access to the volume, without the mirror redundancy. After you replace the leg, you can use the lvconvert command to restore the mirror. This procedure is provided in Section 6.3, "Recovering from LVM Mirror Failure" . The following command converts the linear logical volume vg00/lvol1 to a mirrored logical volume. The following command converts the mirrored logical volume vg00/lvol1 to a linear logical volume, removing the mirror leg.
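To confirm the result of an lvconvert operation, you can check the segment layout of the volume; this quick check uses the vg00/lvol1 volume from the examples above:
# Reports one line per segment, showing whether the volume is linear or mirrored:
lvs --segments vg00/lvol1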
[ "lvcreate -L 10G vg1", "lvcreate -L1500 -ntestlv testvg", "lvcreate -L 50G -n gfslv vg0", "lvcreate -l 60%VG -n mylv testvg", "lvcreate -l 100%FREE -n yourlv testvg", "vgdisplay testvg | grep \"Total PE\" Total PE 10230 lvcreate -l 10230 testvg -n mylv", "lvcreate -L 1500 -ntestlv testvg /dev/sdg1", "lvcreate -l 100 -n testlv testvg /dev/sda1:0-25 /dev/sdb1:50-125", "lvcreate -l 100 -n testlv testvg /dev/sda1:0-25:100-", "lvcreate -L 50G -i2 -I64 -n gfslv vg0", "lvcreate -l 100 -i2 -nstripelv testvg /dev/sda1:0-50 /dev/sdb1:50-100 Using default stripesize 64.00 KB Logical volume \"stripelv\" created", "lvcreate -L 50G -m1 -n mirrorlv vg0", "lvcreate -L 12MB -m1 --corelog -n ondiskmirvol bigvg Logical volume \"ondiskmirvol\" created", "lvcreate -L 500M -m1 -n mirrorlv vg0 /dev/sda1 /dev/sdb1 /dev/sdc1", "lvcreate -L 500M -m1 -n mirrorlv vg0 /dev/sda1:0-499 /dev/sdb1:0-499 /dev/sdc1:0", "lvconvert -m1 vg00/lvol1", "lvconvert -m0 vg00/lvol1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/lv
Chapter 4. Exploring a service network
Chapter 4. Exploring a service network Skupper includes a command to allow you to report all the sites and the services available on a service network. Prerequisites A service network with more than one site Procedure Set your Kubernetes context to a namespace on the service network, for example with kubectl as shown after this procedure. Use the following command to report the status of the service network: USD skupper network status For example: 1 The unique identifier of the site associated with the current context, that is, the west namespace 2 The site name. By default, skupper uses the name of the current namespace. If you want to specify a site name, use skupper init --site-name <site-name> . 3 The version of Skupper running on the site. The site version can be different from the current skupper CLI version. To update a site to the version of the CLI, use skupper update . 4 The unique identifier of a remote site on the service network. 5 The sites that the remote site is linked to. 6 The unique identifier of a remote podman site. Podman sites do not have an associated context.
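One way to perform the first step of the procedure, using the west namespace from the example output, is with kubectl:
# Point the current context at a namespace that is part of the service network:
kubectl config set-context --current --namespace west
skupper network status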
[ "skupper network status", "Sites: ├─ [local] a960b766-20bd-42c8-886d-741f3a9f6aa2(west) 1 │ │ namespace: west │ │ site name: west 2 │ │ version: 1.8.1 3 │ ╰─ Linked sites: │ ├─ 496ca1de-0c80-4e70-bbb4-d0d6ec2a09c0(east) │ │ direction: outgoing │ ╰─ 484cccc3-401c-4c30-a6ed-73382701b18a() │ direction: incoming ├─ [remote] 496ca1de-0c80-4e70-bbb4-d0d6ec2a09c0(east) 4 │ │ namespace: east │ │ site name: east │ │ version: 1.8.1 │ ╰─ Linked sites: │ ╰─ a960b766-20bd-42c8-886d-741f3a9f6aa2(west) 5 │ direction: incoming ╰─ [remote] 484cccc3-401c-4c30-a6ed-73382701b18a() 6 │ site name: vm-user-c3d98 │ version: 1.8.1 ╰─ Linked sites: ╰─ a960b766-20bd-42c8-886d-741f3a9f6aa2(west) direction: outgoing" ]
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/using_service_interconnect/network-service
9.12.3. Updating the Boot Loader Configuration
9.12.3. Updating the Boot Loader Configuration Your completed Red Hat Enterprise Linux installation must be registered in the boot loader to boot properly. A boot loader is software on your machine that locates and starts the operating system. Refer to Appendix E, The GRUB Boot Loader for more information about boot loaders. Figure 9.36. The Upgrade Boot Loader Dialog If the existing boot loader was installed by a Linux distribution, the installation system can modify it to load the new Red Hat Enterprise Linux system. To update the existing Linux boot loader, select Update boot loader configuration . This is the default behavior when you upgrade an existing Red Hat Enterprise Linux installation. GRUB is the standard boot loader for Red Hat Enterprise Linux on 32-bit and 64-bit x86 architectures. If your machine uses another boot loader, such as BootMagic, System Commander, or the loader installed by Microsoft Windows, then the Red Hat Enterprise Linux installation system cannot update it. In this case, select Skip boot loader updating . When the installation process completes, refer to the documentation for your product for assistance. Install a new boot loader as part of an upgrade process only if you are certain you want to replace the existing boot loader. If you install a new boot loader, you may not be able to boot other operating systems on the same machine until you have configured the new boot loader. Select Create new boot loader configuration to remove the existing boot loader and install GRUB. After you make your selection, click Next to continue. If you selected the Create new boot loader configuration option, refer to Section 9.18, "x86, AMD64, and Intel 64 Boot Loader Configuration" . If you chose to update or skip boot loader configuration, installation continues without further input from you.
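After the upgraded system reboots, you can confirm from a shell prompt which entries the updated boot loader offers. This check assumes GRUB Legacy on an x86 BIOS system, as installed by Red Hat Enterprise Linux 6:
# List the default entry and the boot entries in the GRUB configuration:
grep -E "^(default|title)" /boot/grub/grub.conf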
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-upgrading-bootloader-x86
Chapter 3. PodDisruptionBudget [policy/v1]
Chapter 3. PodDisruptionBudget [policy/v1] Description PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PodDisruptionBudgetSpec is a description of a PodDisruptionBudget. status object PodDisruptionBudgetStatus represents information about the status of a PodDisruptionBudget. Status may trail the actual state of a system. 3.1.1. .spec Description PodDisruptionBudgetSpec is a description of a PodDisruptionBudget. Type object Property Type Description maxUnavailable IntOrString An eviction is allowed if at most "maxUnavailable" pods selected by "selector" are unavailable after the eviction, i.e. even in absence of the evicted pod. For example, one can prevent all voluntary evictions by specifying 0. This is a mutually exclusive setting with "minAvailable". minAvailable IntOrString An eviction is allowed if at least "minAvailable" pods selected by "selector" will still be available after the eviction, i.e. even in the absence of the evicted pod. So for example you can prevent all voluntary evictions by specifying "100%". selector LabelSelector Label query over pods whose evictions are managed by the disruption budget. A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace. unhealthyPodEvictionPolicy string UnhealthyPodEvictionPolicy defines the criteria for when unhealthy pods should be considered for eviction. Current implementation considers healthy pods, as pods that have status.conditions item with type="Ready",status="True". Valid policies are IfHealthyBudget and AlwaysAllow. If no policy is specified, the default behavior will be used, which corresponds to the IfHealthyBudget policy. IfHealthyBudget policy means that running pods (status.phase="Running"), but not yet healthy can be evicted only if the guarded application is not disrupted (status.currentHealthy is at least equal to status.desiredHealthy). Healthy pods will be subject to the PDB for eviction. AlwaysAllow policy means that all running pods (status.phase="Running"), but not yet healthy are considered disrupted and can be evicted regardless of whether the criteria in a PDB is met. This means perspective running pods of a disrupted application might not get a chance to become healthy. Healthy pods will be subject to the PDB for eviction. Additional policies may be added in the future. Clients making eviction decisions should disallow eviction of unhealthy pods if they encounter an unrecognized policy in this field. This field is beta-level. The eviction API uses this field when the feature gate PDBUnhealthyPodEvictionPolicy is enabled (enabled by default). 
Possible enum values: - "AlwaysAllow" policy means that all running pods (status.phase="Running"), but not yet healthy are considered disrupted and can be evicted regardless of whether the criteria in a PDB is met. This means perspective running pods of a disrupted application might not get a chance to become healthy. Healthy pods will be subject to the PDB for eviction. - "IfHealthyBudget" policy means that running pods (status.phase="Running"), but not yet healthy can be evicted only if the guarded application is not disrupted (status.currentHealthy is at least equal to status.desiredHealthy). Healthy pods will be subject to the PDB for eviction. 3.1.2. .status Description PodDisruptionBudgetStatus represents information about the status of a PodDisruptionBudget. Status may trail the actual state of a system. Type object Required disruptionsAllowed currentHealthy desiredHealthy expectedPods Property Type Description conditions array (Condition) Conditions contain conditions for PDB. The disruption controller sets the DisruptionAllowed condition. The following are known values for the reason field (additional reasons could be added in the future): - SyncFailed: The controller encountered an error and wasn't able to compute the number of allowed disruptions. Therefore no disruptions are allowed and the status of the condition will be False. - InsufficientPods: The number of pods are either at or below the number required by the PodDisruptionBudget. No disruptions are allowed and the status of the condition will be False. - SufficientPods: There are more pods than required by the PodDisruptionBudget. The condition will be True, and the number of allowed disruptions are provided by the disruptionsAllowed property. currentHealthy integer current number of healthy pods desiredHealthy integer minimum desired number of healthy pods disruptedPods object (Time) DisruptedPods contains information about pods whose eviction was processed by the API server eviction subresource handler but has not yet been observed by the PodDisruptionBudget controller. A pod will be in this map from the time when the API server processed the eviction request to the time when the pod is seen by PDB controller as having been marked for deletion (or after a timeout). The key in the map is the name of the pod and the value is the time when the API server processed the eviction request. If the deletion didn't occur and a pod is still there it will be removed from the list automatically by PodDisruptionBudget controller after some time. If everything goes smooth this map should be empty for the most of the time. Large number of entries in the map may indicate problems with pod deletions. disruptionsAllowed integer Number of pod disruptions that are currently allowed. expectedPods integer total number of pods counted by this disruption budget observedGeneration integer Most recent generation observed when updating this PDB status. DisruptionsAllowed and other status information is valid only if observedGeneration equals to PDB's object generation. 3.2. API endpoints The following API endpoints are available: /apis/policy/v1/poddisruptionbudgets GET : list or watch objects of kind PodDisruptionBudget /apis/policy/v1/watch/poddisruptionbudgets GET : watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets DELETE : delete collection of PodDisruptionBudget GET : list or watch objects of kind PodDisruptionBudget POST : create a PodDisruptionBudget /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets GET : watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name} DELETE : delete a PodDisruptionBudget GET : read the specified PodDisruptionBudget PATCH : partially update the specified PodDisruptionBudget PUT : replace the specified PodDisruptionBudget /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets/{name} GET : watch changes to an object of kind PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name}/status GET : read status of the specified PodDisruptionBudget PATCH : partially update status of the specified PodDisruptionBudget PUT : replace status of the specified PodDisruptionBudget 3.2.1. /apis/policy/v1/poddisruptionbudgets HTTP method GET Description list or watch objects of kind PodDisruptionBudget Table 3.1. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudgetList schema 401 - Unauthorized Empty 3.2.2. /apis/policy/v1/watch/poddisruptionbudgets HTTP method GET Description watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. Table 3.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets HTTP method DELETE Description delete collection of PodDisruptionBudget Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PodDisruptionBudget Table 3.5. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudgetList schema 401 - Unauthorized Empty HTTP method POST Description create a PodDisruptionBudget Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.7. Body parameters Parameter Type Description body PodDisruptionBudget schema Table 3.8. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 202 - Accepted PodDisruptionBudget schema 401 - Unauthorized Empty 3.2.4. /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets HTTP method GET Description watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. Table 3.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name} Table 3.10. Global path parameters Parameter Type Description name string name of the PodDisruptionBudget HTTP method DELETE Description delete a PodDisruptionBudget Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PodDisruptionBudget Table 3.13. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PodDisruptionBudget Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PodDisruptionBudget Table 3.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.17. Body parameters Parameter Type Description body PodDisruptionBudget schema Table 3.18. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty 3.2.6. /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets/{name} Table 3.19. Global path parameters Parameter Type Description name string name of the PodDisruptionBudget HTTP method GET Description watch changes to an object of kind PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.7. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name}/status Table 3.21. Global path parameters Parameter Type Description name string name of the PodDisruptionBudget HTTP method GET Description read status of the specified PodDisruptionBudget Table 3.22. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PodDisruptionBudget Table 3.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.24. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PodDisruptionBudget Table 3.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.26. Body parameters Parameter Type Description body PodDisruptionBudget schema Table 3.27. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty
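The spec fields described above fit together as in the following minimal manifest; the resource name, namespace, and selector labels are placeholders for illustration:
# Create a PodDisruptionBudget that keeps at least two matching pods available:
oc apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
  namespace: example-ns
spec:
  minAvailable: 2          # mutually exclusive with maxUnavailable
  selector:
    matchLabels:
      app: example
EOF
# Inspect the resulting status fields (disruptionsAllowed, currentHealthy, and so on):
oc get poddisruptionbudget example-pdb -n example-ns -o yaml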
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/policy_apis/poddisruptionbudget-policy-v1
4.15. Disabling ptrace()
4.15. Disabling ptrace() The ptrace() system call allows one process to observe and control the execution of another process and change its memory and registers. This call is used primarily by developers during debugging, for example when using the strace utility. When ptrace() is not needed, it can be disabled to improve system security. This can be done by enabling the deny_ptrace Boolean, which prevents all processes, even those running in unconfined_t domains, from using ptrace() on other processes. The deny_ptrace Boolean is disabled by default. To enable it, run the setsebool -P deny_ptrace on command as the root user: To verify whether this Boolean is enabled, use the following command: To disable this Boolean, run the setsebool -P deny_ptrace off command as root: Note The setsebool -P command makes persistent changes. Do not use the -P option if you do not want changes to persist across reboots. This Boolean influences only packages that are part of Red Hat Enterprise Linux. Consequently, third-party packages could still use the ptrace() system call. To list all domains that are allowed to use ptrace() , enter the following command. Note that the setools-console package provides the sesearch utility and that the package is not installed by default.
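As the Note explains, omitting the -P option changes the Boolean only until the next reboot; for example:
# Enable deny_ptrace for the current boot only, then confirm the current value:
~]# setsebool deny_ptrace on
~]# getsebool deny_ptrace
deny_ptrace --> on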
[ "~]# setsebool -P deny_ptrace on", "~]USD getsebool deny_ptrace deny_ptrace --> on", "~]# setsebool -P deny_ptrace off", "~]# sesearch -A -p ptrace,sys_ptrace -C | grep -v deny_ptrace | cut -d ' ' -f 5" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-disable_ptrace
Chapter 2. Installing a cluster on VMC
Chapter 2. Installing a cluster on VMC In OpenShift Container Platform version 4.13, you can install a cluster on VMware vSphere by deploying it to VMware Cloud (VMC) on AWS . Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 2.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address. A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any another Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. 
However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 2.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 2.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned block registry storage . For more information on persistent storage, see Understanding persistent storage . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.4. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 2.1. 
Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Table 2.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 and later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. CPU micro-architecture x86-64-v2 or higher OpenShift 4.13 and later are based on RHEL 9.2 host operating system which raised the microarchitecture requirements to x86-64-v2. See the RHEL Microarchitecture requirements documentation . You can verify compatibility by following the procedures outlined in this KCS article . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Additional resources For more information about CSI automatic migration, see "Overview" in VMware vSphere CSI Driver Operator . 2.5. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description VRRP N/A Required for keepalived ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 2.6. 
VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 2.7. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 2.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 2.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 2.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. Using Storage vMotion can cause issues and is not supported. Using VMware compute vMotion to migrate the workloads for both OpenShift Container Platform compute machines and control plane machines is generally supported, where generally implies that you meet all VMware best practices for vMotion. 
To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using VMware vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. 
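As an illustration only (this sketch is not part of the original procedure), the same two addresses can also be supplied in a customized install-config.yaml instead of at the interactive prompts; the apiVIPs and ingressVIPs field names and the sample addresses below are assumptions based on the 4.x vSphere platform schema and should be verified against your installer version:

platform:
  vsphere:
    # assumed field for the static API address (hypothetical value)
    apiVIPs:
      - 192.168.1.10
    # assumed field for the static Ingress address (hypothetical value)
    ingressVIPs:
      - 192.168.1.11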
DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and by all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and by all the nodes within the cluster. 2.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Important If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.13 release notes document. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 2.10. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. 
For example, on a Fedora operating system, run the following command: # update-ca-trust extract 2.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. When you have configured your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host that is co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Important Some VMware vCenter Single Sign-On (SSO) environments with Active Directory (AD) integration might primarily require you to use the traditional login method, which requires the <domain>\ construct. To ensure that vCenter account permission checks complete properly, consider using the User Principal Name (UPN) login method, such as <username>@<fully_qualified_domainname> . Select the data center in your vCenter instance to connect to. Select the default vCenter datastore to use. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Select the vCenter cluster to install the OpenShift Container Platform cluster in. 
The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.14. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 2.14.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.14.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. 
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.14.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 2.14.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. 
The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 2.15. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 2.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.17. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, that runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. 
The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #...
listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> 
https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy:
strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 2.18. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server 
my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: 
private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_vmc/installing-vmc
3.3. realmd Commands
3.3. realmd Commands The realmd system has two major task areas: managing system enrollment in a domain, and setting which domain users are allowed to access the local system resources. The central utility in realmd is called realm . Most realm commands require the user to specify the action that the utility should perform, and the entity, such as a domain or user account, for which to perform the action: For example: Table 3.1. realmd Commands Command Description Realm Commands discover Run a discovery scan for domains on the network. join Add the system to the specified domain. leave Remove the system from the specified domain. list List all configured domains for the system or all discovered and configured domains. Login Commands permit Enable access for specified users or for all users within a configured domain to access the local system. deny Restrict access for specified users or for all users within a configured domain to access the local system. For more information about the realm commands, see the realm (8) man page.
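As a brief illustration (not part of the original table), the actions are usually combined into a short sequence; the domain and user names below are placeholders:

# discover and join the domain, then restrict local logins to a single account
realm discover ad.example.com
realm join ad.example.com
realm deny --all
realm permit user_name@ad.example.com
# confirm the configured domain and its login policy
realm list

Running realm deny --all before realm permit ensures that only the explicitly permitted accounts can log in to the local system.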
[ "realm command arguments", "realm join ad.example.com realm permit user_name" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/cmd-realmd
7.209. system-config-users
7.209. system-config-users 7.209.1. RHBA-2015:1433 - system-config-users bug fix update An updated system-config-users package that fixes one bug is now available for Red Hat Enterprise Linux 6. The system-config-users package provides a graphical utility for administrating users and groups. Bug Fix BZ# 981910 When the "INACTIVE" parameter was set in the /etc/default/useradd file, using the system-config-users utility to create or edit a user caused the user to be automatically expired. With this update, setting "INACTIVE" in /etc/default/useradd no longer gives users created or edited in system-config-users an incorrect expiration date, and thus no longer causes them to become unusable. Users of system-config-users are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-system-config-users
Preface
Preface This document provides an overview of the OpenShift Data Foundation architecture.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/red_hat_openshift_data_foundation_architecture/pr01
Chapter 2. Differences from upstream OpenJDK 17
Chapter 2. Differences from upstream OpenJDK 17 Red Hat build of OpenJDK in Red Hat Enterprise Linux contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow Red Hat Enterprise Linux updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 17 changes: FIPS support. Red Hat build of OpenJDK 17 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 17 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 17 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all of the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See, Improve system FIPS detection (RHEL Planning Jira) See, Using system-wide cryptographic policies (RHEL documentation)
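As an illustrative check (not part of the original release notes), you can inspect the system state that Red Hat build of OpenJDK 17 detects on RHEL; the com.redhat.fips property shown for opting out is an assumption to verify against the FIPS configuration documentation for your release:

# report whether the host is running in FIPS mode
fips-mode-setup --check
# print the active system-wide cryptographic policy
update-crypto-policies --show
# assumed per-invocation opt-out from automatic FIPS alignment
java -Dcom.redhat.fips=false -version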
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.8/rn-openjdk-diff-from-upstream
Chapter 24. Configuring a custom PKI
Chapter 24. Configuring a custom PKI Some platform components, such as the web console, use Routes for communication and must trust other components' certificates to interact with them. If you are using a custom public key infrastructure (PKI), you must configure it so its privately signed CA certificates are recognized across the cluster. You can leverage the Proxy API to add cluster-wide trusted CA certificates. You must do this either during installation or at runtime. During installation , configure the cluster-wide proxy . You must define your privately signed CA certificates in the install-config.yaml file's additionalTrustBundle setting. The installation program generates a ConfigMap that is named user-ca-bundle that contains the additional CA certificates you defined. The Cluster Network Operator then creates a trusted-ca-bundle ConfigMap that merges these CA certificates with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle; this ConfigMap is referenced in the Proxy object's trustedCA field. At runtime , modify the default Proxy object to include your privately signed CA certificates (part of cluster's proxy enablement workflow). This involves creating a ConfigMap that contains the privately signed CA certificates that should be trusted by the cluster, and then modifying the proxy resource with the trustedCA referencing the privately signed certificates' ConfigMap. Note The installer configuration's additionalTrustBundle field and the proxy resource's trustedCA field are used to manage the cluster-wide trust bundle; additionalTrustBundle is used at install time and the proxy's trustedCA is used at runtime. The trustedCA field is a reference to a ConfigMap containing the custom certificate and key pair used by the cluster component. 24.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 
2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 24.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Warning Enabling the cluster-wide proxy causes the Machine Config Operator (MCO) to trigger node reboot. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 
2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. Create the config map from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses (or other network CIDRs), and port numbers to exclude proxying. Note Port numbers are only supported when configuring IPv6 addresses. Port numbers are not supported when configuring IPv4 addresses. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 24.3. Certificate injection using Operators Once your custom CA certificate is added to the cluster via ConfigMap, the Cluster Network Operator merges the user-provided and system CA certificates into a single bundle and injects the merged bundle into the Operator requesting the trust bundle injection. Important After adding a config.openshift.io/inject-trusted-cabundle="true" label to the config map, existing data in it is deleted. The Cluster Network Operator takes ownership of a config map and only accepts ca-bundle as data. You must use a separate config map to store service-ca.crt by using the service.beta.openshift.io/inject-cabundle=true annotation or a similar configuration. Adding a config.openshift.io/inject-trusted-cabundle="true" label and service.beta.openshift.io/inject-cabundle=true annotation on the same config map can cause issues. 
Operators request this injection by creating an empty ConfigMap with the following label: config.openshift.io/inject-trusted-cabundle="true" An example of the empty ConfigMap: apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: "true" name: ca-inject 1 namespace: apache 1 Specifies the empty ConfigMap name. The Operator mounts this ConfigMap into the container's local trust store. Note Adding a trusted CA certificate is only needed if the certificate is not included in the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. Certificate injection is not limited to Operators. The Cluster Network Operator injects certificates across any namespace when an empty ConfigMap is created with the config.openshift.io/inject-trusted-cabundle=true label. The ConfigMap can reside in any namespace, but the ConfigMap must be mounted as a volume to each container within a pod that requires a custom CA. For example: apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: ... spec: ... containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: ca-inject items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2 1 ca-bundle.crt is required as the ConfigMap key. 2 tls-ca-bundle.pem is required as the ConfigMap path.
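For example, once the empty ca-inject ConfigMap is labeled, you can confirm that the Cluster Network Operator populated it and check which ConfigMap the cluster-wide proxy trusts. This is a minimal verification sketch added for illustration, not part of the original procedure; it assumes the ca-inject ConfigMap and apache namespace from the example above, and a non-zero certificate count indicates that injection succeeded.
# Count the certificates injected into the labeled ConfigMap (assumes the ca-inject/apache example above)
$ oc get configmap ca-inject -n apache -o jsonpath='{.data.ca-bundle\.crt}' | grep -c 'BEGIN CERTIFICATE'
# Show which ConfigMap the cluster-wide proxy currently references for additional trusted CAs
$ oc get proxy/cluster -o jsonpath='{.spec.trustedCA.name}'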
[ "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "config.openshift.io/inject-trusted-cabundle=\"true\"", "apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: \"true\" name: ca-inject 1 namespace: apache", "apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: spec: containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: ca-inject items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/configuring-a-custom-pki
Chapter 9. Deleting files from your bucket
Chapter 9. Deleting files from your bucket To delete files from your bucket by using your workbench, use the delete_object() method. Prerequisites You have cloned the odh-doc-examples repository to your workbench. You have opened the s3client_examples.ipynb file in your workbench. You have installed Boto3 and configured an S3 client. You know the key of the file you want to delete and the bucket that the file is located in. Procedure In the notebook, locate the following instructions to delete files from a bucket: Replace <bucket_name> with the name of your bucket and <object_key> with the key of the file you want to delete, as shown in the example. Run the code cell. The output displays an HTTP response status code of 204 , which indicates that the request was successful. Verification Locate the following instructions to list files in a bucket: Replace <bucket_name> with the name of your bucket, as shown in the example, and run the code cell. The deleted file does not appear in the output.
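If you prefer to verify the deletion programmatically rather than by reading the cell output, you can check the response that delete_object() returns and then confirm that the key is gone. This is a sketch added for illustration; it reuses the example bucket and key from the code above and assumes the same configured s3_client.
# Delete the file and check the HTTP status code returned by the S3 API
response = s3_client.delete_object(Bucket='aqs971-image-registry', Key='/tmp/series43-image12-086.csv')
status = response['ResponseMetadata']['HTTPStatusCode']
assert status == 204, f"Unexpected status code: {status}"
# Confirm the deleted key is no longer listed ('Contents' is absent when the bucket is empty)
remaining = s3_client.list_objects_v2(Bucket='aqs971-image-registry')
keys = [obj['Key'] for obj in remaining.get('Contents', [])]
assert '/tmp/series43-image12-086.csv' not in keys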
[ "#Delete files from bucket s3_client.delete_object(Bucket='<bucket_name>', Key='<object_key>')", "#Delete object from bucket s3_client.delete_object(Bucket='aqs971-image-registry', Key='/tmp/series43-image12-086.csv')", "#Delete Object Verification bucket_name = '<bucket_name>' for key in s3_client.list_objects_v2(Bucket=bucket_name)['Contents']: print(key['Key'])", "#Delete Object Verification bucket_name = 'aqs971-image-registry' for key in s3_client.list_objects_v2(Bucket=bucket_name)['Contents']: print(key['Key'])" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_in_an_s3-compatible_object_store/deleting-files-on-your-object-store_s3
Hosted control planes
Hosted control planes OpenShift Container Platform 4.15 Using hosted control planes with OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/hosted_control_planes/index
Part III. Additional Configuration to manage CA services
Part III. Additional Configuration to manage CA services
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/part_iii_additional_configuration_to_manage_ca_services
Chapter 70. secret
Chapter 70. secret This chapter describes the commands under the secret command. 70.1. secret container create Store a container in Barbican. Usage: Table 70.1. Command arguments Value Summary -h, --help Show this help message and exit --name NAME, -n NAME A human-friendly name. --type TYPE Type of container to create (default: generic). --secret SECRET, -s SECRET One secret to store in a container (can be set multiple times). Example: --secret "private_key=https://url.test/v1/secrets/1-2-3-4" Table 70.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.2. secret container delete Delete a container by providing its href. Usage: Table 70.6. Positional arguments Value Summary URI The uri reference for the container Table 70.7. Command arguments Value Summary -h, --help Show this help message and exit 70.3. secret container get Retrieve a container by providing its URI. Usage: Table 70.8. Positional arguments Value Summary URI The uri reference for the container. Table 70.9. Command arguments Value Summary -h, --help Show this help message and exit Table 70.10. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.12. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.4. secret container list List containers. Usage: Table 70.14. Command arguments Value Summary -h, --help Show this help message and exit --limit LIMIT, -l LIMIT Specify the limit to the number of items to list per page (default: 10; maximum: 100) --offset OFFSET, -o OFFSET Specify the page offset (default: 0) --name NAME, -n NAME Specify the container name (default: none) --type TYPE, -t TYPE Specify the type filter for the list (default: none). Table 70.15. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.16. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.5. secret delete Delete a secret by providing its URI. Usage: Table 70.19. Positional arguments Value Summary URI The uri reference for the secret Table 70.20. Command arguments Value Summary -h, --help Show this help message and exit 70.6. secret get Retrieve a secret by providing its URI. Usage: Table 70.21. Positional arguments Value Summary URI The uri reference for the secret. Table 70.22. Command arguments Value Summary -h, --help Show this help message and exit --decrypt, -d If specified, retrieve the unencrypted secret data. --payload, -p If specified, retrieve the unencrypted secret data. --file <filename>, -F <filename> If specified, save the payload to a new file with the given filename. --payload_content_type PAYLOAD_CONTENT_TYPE, -t PAYLOAD_CONTENT_TYPE The content type of the decrypted secret (default: text/plain). Table 70.23. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.24. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.25. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.26. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.7. secret list List secrets. Usage: Table 70.27. Command arguments Value Summary -h, --help Show this help message and exit --limit LIMIT, -l LIMIT Specify the limit to the number of items to list per page (default: 10; maximum: 100) --offset OFFSET, -o OFFSET Specify the page offset (default: 0) --name NAME, -n NAME Specify the secret name (default: none) --algorithm ALGORITHM, -a ALGORITHM The algorithm filter for the list(default: none). --bit-length BIT_LENGTH, -b BIT_LENGTH The bit length filter for the list (default: 0). --mode MODE, -m MODE The algorithm mode filter for the list (default: None). 
--secret-type SECRET_TYPE, -s SECRET_TYPE Specify the secret type (default: none). Table 70.28. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.29. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.30. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.31. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.8. secret order create Create a new order. Usage: Table 70.32. Positional arguments Value Summary type The type of the order (key, asymmetric, certificate) to create. Table 70.33. Command arguments Value Summary -h, --help Show this help message and exit --name NAME, -n NAME A human-friendly name. --algorithm ALGORITHM, -a ALGORITHM The algorithm to be used with the requested key (default: aes). --bit-length BIT_LENGTH, -b BIT_LENGTH The bit length of the requested secret key (default: 256). --mode MODE, -m MODE The algorithm mode to be used with the requested key (default: cbc). --payload-content-type PAYLOAD_CONTENT_TYPE, -t PAYLOAD_CONTENT_TYPE The type/format of the secret to be generated (default: application/octet-stream). --expiration EXPIRATION, -x EXPIRATION The expiration time for the secret in iso 8601 format. --request-type REQUEST_TYPE The type of the certificate request. --subject-dn SUBJECT_DN The subject of the certificate. --source-container-ref SOURCE_CONTAINER_REF The source of the certificate when using stored-key requests. --ca-id CA_ID The identifier of the ca to use for the certificate request. --profile PROFILE The profile of certificate to use. --request-file REQUEST_FILE The file containing the csr. Table 70.34. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.35. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.36. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.37. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.9. secret order delete Delete an order by providing its href. Usage: Table 70.38. 
Positional arguments Value Summary URI The uri reference for the order Table 70.39. Command arguments Value Summary -h, --help Show this help message and exit 70.10. secret order get Retrieve an order by providing its URI. Usage: Table 70.40. Positional arguments Value Summary URI The uri reference order. Table 70.41. Command arguments Value Summary -h, --help Show this help message and exit Table 70.42. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.43. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.44. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.45. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.11. secret order list List orders. Usage: Table 70.46. Command arguments Value Summary -h, --help Show this help message and exit --limit LIMIT, -l LIMIT Specify the limit to the number of items to list per page (default: 10; maximum: 100) --offset OFFSET, -o OFFSET Specify the page offset (default: 0) Table 70.47. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.48. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.49. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.50. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.12. secret store Store a secret in Barbican. Usage: Table 70.51. Command arguments Value Summary -h, --help Show this help message and exit --name NAME, -n NAME A human-friendly name. --secret-type SECRET_TYPE, -s SECRET_TYPE The secret type; must be one of symmetric, public, private, certificate, passphrase, opaque (default) --payload-content-type PAYLOAD_CONTENT_TYPE, -t PAYLOAD_CONTENT_TYPE The type/format of the provided secret data; "text/plain" is assumed to be UTF-8; required when --payload is supplied. --payload-content-encoding PAYLOAD_CONTENT_ENCODING, -e PAYLOAD_CONTENT_ENCODING Required if --payload-content-type is "application/octet-stream". --algorithm ALGORITHM, -a ALGORITHM The algorithm (default: aes). 
--bit-length BIT_LENGTH, -b BIT_LENGTH The bit length (default: 256). --mode MODE, -m MODE The algorithm mode; used only for reference (default: cbc) --expiration EXPIRATION, -x EXPIRATION The expiration time for the secret in iso 8601 format. --payload PAYLOAD, -p PAYLOAD The unencrypted secret data. --file <filename>, -F <filename> File containing the secret payload Table 70.52. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.53. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.54. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.55. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.13. secret update Update a secret with no payload in Barbican. Usage: Table 70.56. Positional arguments Value Summary URI The uri reference for the secret. payload The unencrypted secret Table 70.57. Command arguments Value Summary -h, --help Show this help message and exit
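To tie the subcommands above together, the following is a brief workflow sketch rather than an official example: it assumes your OpenStack credentials are already sourced, and <secret_href> stands for the URI that Barbican returns when the secret is stored.
# Store a passphrase-type secret
$ openstack secret store --name my-passphrase --secret-type passphrase --payload 'example-payload'
# Find the secret href by filtering the list on the name
$ openstack secret list --name my-passphrase
# Retrieve the decrypted payload by providing the secret URI
$ openstack secret get --payload <secret_href>
# Delete the secret when it is no longer needed
$ openstack secret delete <secret_href>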
[ "openstack secret container create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name NAME] [--type TYPE] [--secret SECRET]", "openstack secret container delete [-h] URI", "openstack secret container get [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] URI", "openstack secret container list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit LIMIT] [--offset OFFSET] [--name NAME] [--type TYPE]", "openstack secret delete [-h] URI", "openstack secret get [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--decrypt | --payload | --file <filename>] [--payload_content_type PAYLOAD_CONTENT_TYPE] URI", "openstack secret list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit LIMIT] [--offset OFFSET] [--name NAME] [--algorithm ALGORITHM] [--bit-length BIT_LENGTH] [--mode MODE] [--secret-type SECRET_TYPE]", "openstack secret order create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name NAME] [--algorithm ALGORITHM] [--bit-length BIT_LENGTH] [--mode MODE] [--payload-content-type PAYLOAD_CONTENT_TYPE] [--expiration EXPIRATION] [--request-type REQUEST_TYPE] [--subject-dn SUBJECT_DN] [--source-container-ref SOURCE_CONTAINER_REF] [--ca-id CA_ID] [--profile PROFILE] [--request-file REQUEST_FILE] type", "openstack secret order delete [-h] URI", "openstack secret order get [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] URI", "openstack secret order list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit LIMIT] [--offset OFFSET]", "openstack secret store [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name NAME] [--secret-type SECRET_TYPE] [--payload-content-type PAYLOAD_CONTENT_TYPE] [--payload-content-encoding PAYLOAD_CONTENT_ENCODING] [--algorithm ALGORITHM] [--bit-length BIT_LENGTH] [--mode MODE] [--expiration EXPIRATION] [--payload PAYLOAD | --file <filename>]", "openstack secret update [-h] URI payload" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/secret
Chapter 8. Creating a Keycloak instance
Chapter 8. Creating a Keycloak instance When the Red Hat Single Sign-On Operator is installed, you can create a Keycloak instance for use with Ansible Automation Platform. You can either provide an external PostgreSQL database or have one created for you. Procedure Navigate to Operators → Installed Operators . Select the rh-sso project. Select the Red Hat Single Sign-On Operator . On the Red Hat Single Sign-On Operator details page, select Keycloak . Click Create instance . Click YAML view . The default Keycloak custom resource is as follows: apiVersion: keycloak.org/v1alpha1 kind: Keycloak metadata: name: example-keycloak labels: app: sso namespace: aap spec: externalAccess: enabled: true instances: 1 Click Create . When deployment is complete, you can use the administrator credentials to log in to the administrative console. You can find the credentials for the administrator in the credential-<custom-resource> secret (for example, credential-example-keycloak ) in the namespace.
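For example, you can read the generated administrator credentials from that secret with the oc CLI. This is a sketch based on the example-keycloak custom resource and aap namespace shown above; the data keys ( ADMIN_USERNAME and ADMIN_PASSWORD ) are assumptions that can differ between Operator versions, so inspect the secret if they do not match.
# Decode the administrator username and password from the generated credential secret (assumed key names)
$ oc get secret credential-example-keycloak -n aap -o jsonpath='{.data.ADMIN_USERNAME}' | base64 --decode; echo
$ oc get secret credential-example-keycloak -n aap -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 --decode; echo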
[ "apiVersion: keycloak.org/v1alpha1 kind: Keycloak metadata: name: example-keycloak labels: app: sso namespace: aap spec: externalAccess: enabled: true instances: 1" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/proc-create-keycloak-instance_using-a-rhsso-operator
5.174. lohit-telugu-fonts
5.174. lohit-telugu-fonts 5.174.1. RHBA-2012:1212 - lohit-telugu-fonts bug fix update An updated lohit-telugu-fonts package that fixes one bug is now available for Red Hat Enterprise Linux 6. The lohit-telugu-fonts package provides a free Telugu TrueType/OpenType font. Bug Fix BZ# 640610 Due to a bug in the lohit-telugu-fonts package, four syllables were rendered incorrectly. This bug has been fixed and these syllables now render correctly. All users of lohit-telugu-fonts are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/lohit-telugu-fonts
Chapter 9. Configuring virtual GPUs for instances
Chapter 9. Configuring virtual GPUs for instances To support GPU-based rendering on your instances, you can define and manage virtual GPU (vGPU) resources according to your available physical GPU devices and your hypervisor type. You can use this configuration to effectively spread the rendering workloads between all your physical GPU devices , and to control the scheduling of your vGPU-enabled instances. To enable vGPU in the Compute service (nova), you must perform the following tasks: Identify the nodes on which you want to configure vGPUs. Retrieve the PCI address for each physical GPU on each Compute node, or for each SR-IOV virtual function (VF) if the GPU supports SR-IOV. Configure the GPU profiles on each Compute node. Each instance hosted on the configured Compute nodes can support GPU workloads with virtual GPU devices that correspond to the physical GPU devices. The Compute service tracks the number of vGPU devices that are available for each GPU profile you define on each host. The Compute service schedules instances to these hosts, attaches the devices, and monitors the vGPU usage. When an instance is deleted, the Compute service adds the vGPU devices back to the available pool. Important Red Hat enables the use of NVIDIA vGPU in RHOSO without the requirement for support exceptions. However, Red Hat does not provide technical support for the NVIDIA vGPU drivers. The NVIDIA vGPU drivers are shipped and supported by NVIDIA. You require an NVIDIA Certified Support Services subscription to obtain NVIDIA Enterprise Support for NVIDIA vGPU software. For issues that result from the use of NVIDIA vGPUs where you are unable to reproduce the issue on a supported component, the following support policies apply: When Red Hat does not suspect that the third-party component is involved in the issue, the normal Scope of Support and Red Hat SLA apply. When Red Hat suspects that the third-party component is involved in the issue, the customer will be directed to NVIDIA in line with the Red Hat third party support and certification policies . For more information, see the Knowledge Base article Obtaining Support from NVIDIA . 9.1. Supported configurations and limitations Supported GPU cards For a list of supported NVIDIA GPU cards, see Virtual GPU Software Supported Products on the NVIDIA website. Limitations when using vGPU devices Each instance can use only one vGPU resource. Live migration of vGPU instances between hosts is not supported. Evacuation of vGPU instances is not supported. If you need to reboot the Compute node that hosts the vGPU instances, the vGPUs are not automatically reassigned to the recreated instances. You must either cold migrate the instances before you reboot the Compute node, or manually allocate each vGPU to the correct instance after reboot. To manually allocate each vGPU, you must retrieve the mdev UUID from the instance XML for each vGPU instance that runs on the Compute node before you reboot. You can use the following command to discover the mdev UUID for each instance: Replace <instance_name> with the libvirt instance name, OS-EXT-SRV-ATTR:instance_name , returned in a /servers request to the Compute API. By default, vGPU types on Compute hosts are not exposed to API users. To expose the vGPU types on Compute hosts to API users, you must configure resource provider traits and create flavors that require the traits. Alternatively, if you only have one vGPU type, you can grant access by adding the hosts to a host aggregate. 
For more information, see Creating and managing host aggregates . If you use NVIDIA accelerator hardware, you must comply with the NVIDIA licensing requirements. For example, NVIDIA vGPU GRID requires a licensing server. For more information about the NVIDIA licensing requirements, see NVIDIA License Server Release Notes on the NVIDIA website. 9.2. Preparing to configure the Compute service for vGPU Before you configure the Compute service for vGPU, you must prepare the data plane nodes that you want to use for vGPU and you must download and install the NVIDIA device driver. Procedure Access the remote shell for openstackclient : Identify a node that you want to use for vGPU: Retrieve the IP address of the Compute node that you want to use for vGPU: Use SSH to connect to the data plane node: Create the file /etc/modprobe.d/blacklist-nouveau.conf . Disable the nouveau driver by adding the following configuration to blacklist-nouveau.conf : Regenerate the initramfs : Download and install the NVIDIA driver from the NVIDIA portal. For more information, see NVIDIA DOCS HUB . Reboot the node: Repeat this procedure for all nodes that you want to allocate for vGPU instances. 9.3. Configuring the Compute service for vGPU You need to retrieve and assign the vGPU type that corresponds to the physical GPU device in your environment and configure a vGPU type. Note You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes. Prerequisites The oc command line tool is installed on your workstation. You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges. You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes that you can configure vGPU on. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide. Procedure Virtual GPUs are mediated devices. Retrieve the PCI address for each device that can create mediated devices on each Compute node: Note The PCI address of the GPU - or the GPU SR-IOV virtual function (VF) that can create vGPUs - is used as the device driver directory name, for example, 0000:84:00.0. In this procedure, the vGPU-capable resource is called an mdev device. Note Recent generations of NVIDIA cards now support SR-IOV. Refer to the NVIDIA documentation to discover if your GPU is SR-IOV-capable. Review the supported mdev types for each available pGPU device on each Compute node to discover the available vGPU types: Replace <mdev_device> with the PCI address for the mdev device, for example, 0000:84:00.0. For example, the following Compute node has 4 pGPUs, and each pGPU supports the same 11 vGPU types: Create or update the ConfigMap CR named nova-extra-config.yaml and set the values of the parameters under [devices] : For more information about creating ConfigMap objects, see Creating and using config maps . Optional: To configure more than one vGPU type, map the supported vGPU types to the pGPUs: The nvidia-35 vGPU type is supported by the pGPUs that are in the PCI addresses 0000:84:00.0 and 0000:85:00.0. The nvidia-36 vGPU type is supported only by the pGPUs that are in the PCI address 0000:86:00.0. 
Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_vgpu_deploy.yaml on your workstation: In the compute_vgpu_deploy.yaml CR, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes that you want to use for vGPU. Warning If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps: Check the services list of the node set and find the name of the DataPlaneService that points to nova. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova . If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config , then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap . If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets. Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment. Save the compute_vgpu_deploy.yaml deployment file. Deploy the data plane: Verify that the data plane is deployed: Tip Append the -w option to the end of the get command to track deployment progress. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane: Optional: Enable SR-IOV VFs of the GPUs. For more information, see Preparing virtual function for SRIOV vGPU on the NVIDIA DOCS HUB.
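When you review the supported mdev types earlier in this procedure, it can also help to check how many vGPU instances a given profile provides before enabling it in the Compute service. The following is an illustrative sketch, not part of the official procedure; it uses the 0000:84:00.0 PCI address and NVIDIA-35 type from the example output above and relies on the standard mdev sysfs attributes ( name , description , available_instances ).
# Inspect one candidate vGPU profile on a pGPU before adding it to enabled_mdev_types
$ cat /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types/NVIDIA-35/name
$ cat /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types/NVIDIA-35/description
$ cat /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types/NVIDIA-35/available_instances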
[ "virsh dumpxml <instance_name> | grep mdev", "oc rsh openstackclient", "openstack hypervisor list", "ssh <node_ipaddress>", "blacklist nouveau options nouveau modeset=0", "dracut --force grub2-mkconfig -o /boot/grub2/grub.cfg --update-bls-cmdline", "sudo reboot", "ls /sys/class/mdev_bus/", "ls /sys/class/mdev_bus/<mdev_device>/mdev_supported_types", "ls /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types: NVIDIA-35 NVIDIA-36 NVIDIA-37 NVIDIA-38 NVIDIA-39 NVIDIA-40 NVIDIA-41 NVIDIA-42 NVIDIA-43 NVIDIA-44 NVIDIA-45 ls /sys/class/mdev_bus/0000:85:00.0/mdev_supported_types: NVIDIA-35 NVIDIA-36 NVIDIA-37 NVIDIA-38 NVIDIA-39 NVIDIA-40 NVIDIA-41 NVIDIA-42 NVIDIA-43 NVIDIA-44 NVIDIA-45 ls /sys/class/mdev_bus/0000:86:00.0/mdev_supported_types: NVIDIA-35 NVIDIA-36 NVIDIA-37 NVIDIA-38 NVIDIA-39 NVIDIA-40 NVIDIA-41 NVIDIA-42 NVIDIA-43 NVIDIA-44 NVIDIA-45 ls /sys/class/mdev_bus/0000:87:00.0/mdev_supported_types: NVIDIA-35 NVIDIA-36 NVIDIA-37 NVIDIA-38 NVIDIA-39 NVIDIA-40 NVIDIA-41 NVIDIA-42 NVIDIA-43 NVIDIA-44 NVIDIA-45", "apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 34-nova-vgpu.conf: | [devices] enabled_mdev_types = nvidia-35, nvidia-36", "[devices] enabled_mdev_types = nvidia-35, nvidia-36 [mdev_nvidia-35] device_addresses = 0000:84:00.0,0000:85:00.0 [vgpu_nvidia-36] device_addresses = 0000:86:00.0", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: compute-vgpu", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: compute-vgpu spec: nodeSets: - openstack-edpm - compute-vgpu - - <nodeSet_name>", "oc create -f compute_vgpu_deploy.yaml", "oc get openstackdataplanenodeset NAME STATUS MESSAGE compute-vgpu True Deployed", "oc rsh -n openstack openstackclient openstack hypervisor list" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-virtual-gpus-for-instances_vgpu
7.78. gvfs
7.78. gvfs 7.78.1. RHBA-2012:1124 - gvfs bug fix and enhancement update Updated gvfs packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. GVFS is the GNOME desktop's virtual file system layer, which allows users to easily access local and remote data, including via the FTP, SFTP, WebDAV, CIFS and SMB protocols, among others. GVFS integrates with the GIO (GNOME I/O) abstraction layer. Bug Fixes BZ#599055 Previously, rules for ignoring mounts were too restrictive. If the user clicked on an encrypted volume in the Nautilus' sidebar, an error message was displayed and the volume could not be accessed. The underlying source code now contains additional checks so that encrypted volumes have proper mounts associated (if available), and the file system can be browsed as expected. BZ#669526 Due to a bug in the kernel, a freshly formatted Blu-ray Disk Rewritable (BD-RE) medium contains a single track with invalid data that covers the whole medium. This empty track was previously incorrectly detected, causing the drive to be unusable for certain applications, such as Brasero. This update adds a workaround to detect the empty track, so that freshly formatted BD-RE media are properly recognized as blank. BZ#682799, BZ# 746977 , BZ# 746978 , BZ# 749369 , BZ# 749371 , BZ# 749372 The code of the gvfs-info, gvfs-open, gvfs-cat, gvfs-ls and gvfs-mount utilities contained hard-coded exit codes. This caused the utilities to always return zero on exit. The exit codes have been revised so that the mentioned gvfs utilities now return proper exit codes. BZ#746905 When running gvfs-set-attribute with an invalid command-line argument specified, the utility terminated unexpectedly with a segmentation fault. The underlying source code has been modified so that the utility now prints a proper error message when an invalid argument is specified. BZ#809708 Due to missing object cleanup calls, the gvfsd daemon could use excessive amount of memory, which caused the system to become unresponsive. Proper object cleanup calls have been added with this update, which ensures that the memory consumption is constant and the system does not hang in this scenario. All users of gvfs are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/gvfs
Chapter 14. Performing latency tests for platform verification
Chapter 14. Performing latency tests for platform verification You can use the Cloud-native Network Functions (CNF) tests image to run latency tests on a CNF-enabled OpenShift Container Platform cluster, where all the components required for running CNF workloads are installed. Run the latency tests to validate node tuning for your workload. The cnf-tests container image is available at registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 . Important The cnf-tests image also includes several tests that are not supported by Red Hat at this time. Only the latency tests are supported by Red Hat. 14.1. Prerequisites for running latency tests Your cluster must meet the following requirements before you can run the latency tests: You have configured a performance profile with the Node Tuning Operator. You have applied all the required CNF configurations in the cluster. You have a pre-existing MachineConfigPool CR applied in the cluster. The default worker pool is worker-cnf . Additional resources For more information about creating the cluster performance profile, see Provisioning a worker with real-time capabilities . 14.2. About discovery mode for latency tests Use discovery mode to validate the functionality of a cluster without altering its configuration. Existing environment configurations are used for the tests. The tests can find the configuration items needed and use those items to execute the tests. If resources needed to run a specific test are not found, the test is skipped, providing an appropriate message to the user. After the tests are finished, no cleanup of the preconfigured configuration items is done, and the test environment can be immediately used for another test run. Important When running the latency tests, always run the tests with -e DISCOVERY_MODE=true and -ginkgo.focus set to the appropriate latency test. If you do not run the latency tests in discovery mode, your existing live cluster performance profile configuration will be modified by the test run. Limiting the nodes used during tests The nodes on which the tests are executed can be limited by specifying a NODES_SELECTOR environment variable, for example, -e NODES_SELECTOR=node-role.kubernetes.io/worker-cnf . Any resources created by the test are limited to nodes with matching labels. Note If you want to override the default worker pool, pass the -e ROLE_WORKER_CNF=<custom_worker_pool> variable to the command specifying an appropriate label. 14.3. Measuring latency The cnf-tests image uses three tools to measure the latency of the system: hwlatdetect cyclictest oslat Each tool has a specific use. Use the tools in sequence to achieve reliable test results. hwlatdetect Measures the baseline that the bare-metal hardware can achieve. Before proceeding with the latency test, ensure that the latency reported by hwlatdetect meets the required threshold because you cannot fix hardware latency spikes by operating system tuning. cyclictest Verifies the real-time kernel scheduler latency after hwlatdetect passes validation. The cyclictest tool schedules a repeated timer and measures the difference between the desired and the actual trigger times. The difference can uncover basic issues with the tuning caused by interrupts or process priorities. The tool must run on a real-time kernel. oslat Behaves similarly to a CPU-intensive DPDK application and measures all the interruptions and disruptions to the busy loop that simulates CPU heavy data processing. The tests introduce the following environment variables: Table 14.1. 
Latency test environment variables Environment variables Description LATENCY_TEST_DELAY Specifies the amount of time in seconds after which the test starts running. You can use the variable to allow the CPU manager reconcile loop to update the default CPU pool. The default value is 0. LATENCY_TEST_CPUS Specifies the number of CPUs that the pod running the latency tests uses. If you do not set the variable, the default configuration includes all isolated CPUs. LATENCY_TEST_RUNTIME Specifies the amount of time in seconds that the latency test must run. The default value is 300 seconds. HWLATDETECT_MAXIMUM_LATENCY Specifies the maximum acceptable hardware latency in microseconds for the workload and operating system. If you do not set the value of HWLATDETECT_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool compares the default expected threshold (20ms) and the actual maximum latency in the tool itself. Then, the test fails or succeeds accordingly. CYCLICTEST_MAXIMUM_LATENCY Specifies the maximum latency in microseconds that all threads expect before waking up during the cyclictest run. If you do not set the value of CYCLICTEST_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool skips the comparison of the expected and the actual maximum latency. OSLAT_MAXIMUM_LATENCY Specifies the maximum acceptable latency in microseconds for the oslat test results. If you do not set the value of OSLAT_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool skips the comparison of the expected and the actual maximum latency. MAXIMUM_LATENCY Unified variable that specifies the maximum acceptable latency in microseconds. Applicable for all available latency tools. LATENCY_TEST_RUN Boolean parameter that indicates whether the tests should run. LATENCY_TEST_RUN is set to false by default. To run the latency tests, set this value to true . Note Variables that are specific to a latency tool take precedence over unified variables. For example, if OSLAT_MAXIMUM_LATENCY is set to 30 microseconds and MAXIMUM_LATENCY is set to 10 microseconds, the oslat test will run with maximum acceptable latency of 30 microseconds. 14.4. Running the latency tests Run the cluster latency tests to validate node tuning for your Cloud-native Network Functions (CNF) workload. Important Always run the latency tests with DISCOVERY_MODE=true set. If you don't, the test suite will make changes to the running cluster configuration. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Procedure Open a shell prompt in the directory containing the kubeconfig file. You provide the test image with a kubeconfig file in current directory and its related USDKUBECONFIG environment variable, mounted through a volume. This allows the running container to use the kubeconfig file from inside the container. Run the latency tests by entering the following command: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \ /usr/bin/test-run.sh -ginkgo.focus="\[performance\]\ Latency\ Test" Optional: Append -ginkgo.dryRun to run the latency tests in dry-run mode. This is useful for checking what the tests run. Optional: Append -ginkgo.v to run the tests with increased verbosity. 
Optional: To run the latency tests against a specific performance profile, run the following command, substituting appropriate values: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUN=true -e FEATURES=performance -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ -e PERF_TEST_PROFILE=<performance_profile> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \ /usr/bin/test-run.sh -ginkgo.focus="[performance]\ Latency\ Test" where: <performance_profile> Is the name of the performance profile you want to run the latency tests against. Important For valid latency test results, run the tests for at least 12 hours. 14.4.1. Running hwlatdetect The hwlatdetect tool is available in the rt-kernel package with a regular subscription of Red Hat Enterprise Linux (RHEL) 8.x. Important Always run the latency tests with DISCOVERY_MODE=true set. If you don't, the test suite will make changes to the running cluster configuration. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have installed the real-time kernel in the cluster. You have logged in to registry.redhat.io with your Customer Portal credentials. Procedure To run the hwlatdetect tests, run the following command, substituting variable values as appropriate: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf \ -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \ /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus="hwlatdetect" The hwlatdetect test runs for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 ms). If the results exceed the latency threshold, the test fails. Important For valid results, the test should run for at least 12 hours. Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0908 15:25:20.023712 27 request.go:601] Waited for 1.046586367s due to client-side throttling, not priority and fairness, request: GET:https://api.hlxcl6.lab.eng.tlv2.redhat.com:6443/apis/imageregistry.operator.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662650718 Will run 1 of 194 specs [...] 
• Failure [283.574 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:228 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:236 Log file created at: 2022/09/08 15:25:27 Running on machine: hwlatdetect-b6n4n Binary: Built with gc go1.17.12 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0908 15:25:27.160620 1 node.go:39] Environment information: /proc/cmdline: BOOT_IMAGE=(hd1,gpt3)/ostree/rhcos-c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/vmlinuz-4.18.0-372.19.1.el8_6.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/0 ip=dhcp root=UUID=5f80c283-f6e6-4a27-9b47-a287157483b2 rw rootflags=prjquota boot=UUID=773bf59a-bafd-48fc-9a87-f62252d739d3 skew_tick=1 nohz=on rcu_nocbs=0-3 tuned.non_isolcpus=0000ffff,ffffffff,fffffff0 systemd.cpu_affinity=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79 intel_iommu=on iommu=pt isolcpus=managed_irq,0-3 nohz_full=0-3 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off skew_tick=1 rcutree.kthread_prio=11 + + I0908 15:25:27.160830 1 node.go:46] Environment information: kernel version 4.18.0-372.19.1.el8_6.x86_64 I0908 15:25:27.160857 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 100 --window 10000000us --width 950000us] F0908 15:27:10.603523 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 100 seconds detector: tracer parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling period: 9050000us Output File: None Starting test test finished Max Latency: 326us 2 Samples recorded: 5 Samples exceeding threshold: 5 ts: 1662650739.017274507, inner:6, outer:6 ts: 1662650749.257272414, inner:14, outer:326 ts: 1662650779.977272835, inner:314, outer:12 ts: 1662650800.457272384, inner:3, outer:9 ts: 1662650810.697273520, inner:3, outer:2 [...] JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:476 Ran 1 of 194 Specs in 365.797 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (366.08s) FAIL 1 You can configure the latency threshold by using the MAXIMUM_LATENCY or the HWLATDETECT_MAXIMUM_LATENCY environment variables. 2 The maximum latency value measured during the test. Example hwlatdetect test results You can capture the following types of results: Rough results that are gathered after each run to create a history of impact on any changes made throughout the test. The combined set of the rough tests with the best results and configuration settings. 
Example of good results hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0 The hwlatdetect tool only provides output if the sample exceeds the specified threshold. Example of bad results hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test ts: 1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63 The output of hwlatdetect shows that multiple samples exceed the threshold. However, the same output can indicate different results based on the following factors: The duration of the test The number of CPU cores The host firmware settings Warning Before proceeding with the latency test, ensure that the latency reported by hwlatdetect meets the required threshold. Fixing latencies introduced by hardware might require you to contact the system vendor support. Not all latency spikes are hardware related. Ensure that you tune the host firmware to meet your workload requirements. For more information, see Setting firmware parameters for system tuning . 14.4.2. Running cyclictest The cyclictest tool measures the real-time kernel scheduler latency on the specified CPUs. Important Always run the latency tests with DISCOVERY_MODE=true set. If you don't, the test suite will make changes to the running cluster configuration. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have logged in to registry.redhat.io with your Customer Portal credentials. You have installed the real-time kernel in the cluster. You have applied a cluster performance profile by using the Node Tuning Operator. Procedure To perform the cyclictest , run the following command, substituting variable values as appropriate: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf \ -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \ /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus="cyclictest" The command runs the cyclictest tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (in this example, 20 μs). Latency spikes of 20 μs and above are generally not acceptable for telco RAN workloads. If the results exceed the latency threshold, the test fails. Important For valid results, the test should run for at least 12 hours.
Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=cyclictest I0908 13:01:59.193776 27 request.go:601] Waited for 1.046228824s due to client-side throttling, not priority and fairness, request: GET:https://api.compute-1.example.com:6443/apis/packages.operators.coreos.com/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662642118 Will run 1 of 194 specs [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the cyclictest image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:220 Ran 1 of 194 Specs in 161.151 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (161.48s) FAIL Example cyclictest results The same output can indicate different results for different workloads. For example, spikes up to 18ms are acceptable for 4G DU workloads, but not for 5G DU workloads. Example of good results running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m # Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries ... # Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 # Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 # Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 # Histogram Overflow at cycle number: # Thread 0: # Thread 1: # Thread 2: # Thread 3: # Thread 4: # Thread 5: # Thread 6: # Thread 7: # Thread 8: # Thread 9: # Thread 10: # Thread 11: # Thread 12: # Thread 13: # Thread 14: # Thread 15: Example of bad results running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m # Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries ... 
# Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 # Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 # Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 # Histogram Overflow at cycle number: # Thread 0: 155922 # Thread 1: 110064 # Thread 2: 110064 # Thread 3: 110063 155921 # Thread 4: 110063 155921 # Thread 5: 155920 # Thread 6: # Thread 7: 110062 # Thread 8: 110062 # Thread 9: 155919 # Thread 10: 110061 155919 # Thread 11: 155918 # Thread 12: 155918 # Thread 13: 110060 # Thread 14: 110060 # Thread 15: 110059 155917 14.4.3. Running oslat The oslat test simulates a CPU-intensive DPDK application and measures all the interruptions and disruptions to test how the cluster handles CPU heavy data processing. Important Always run the latency tests with DISCOVERY_MODE=true set. If you don't, the test suite will make changes to the running cluster configuration. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have logged in to registry.redhat.io with your Customer Portal credentials. You have applied a cluster performance profile by using the Node Tuning Operator. Procedure To perform the oslat test, run the following command, substituting variable values as appropriate: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf \ -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \ /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus="oslat" LATENCY_TEST_CPUS specifies the list of CPUs to test with the oslat command. The command runs the oslat tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 ms). If the results exceed the latency threshold, the test fails. Important For valid results, the test should run for at least 12 hours. Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=oslat I0908 12:51:55.999393 27 request.go:601] Waited for 1.044848101s due to client-side throttling, not priority and fairness, request: GET:https://compute-1.example.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662641514 Will run 1 of 194 specs [...] 
• Failure [77.833 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the oslat image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:128 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:153 The current latency 304 is bigger than the expected one 1 : 1 [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the oslat image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:177 Ran 1 of 194 Specs in 161.091 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (161.42s) FAIL 1 In this example, the measured latency is outside the maximum allowed value. 14.5. Generating a latency test failure report Use the following procedures to generate a JUnit latency test output and test failure report. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a test failure report with information about the cluster state and resources for troubleshooting by passing the --report parameter with the path to where the report is dumped: USD podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/reportdest:<report_folder_path> \ -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \ /usr/bin/test-run.sh --report <report_folder_path> \ -ginkgo.focus="\[performance\]\ Latency\ Test" where: <report_folder_path> Is the path to the folder where the report is generated. 14.6. Generating a JUnit latency test report Use the following procedures to generate a JUnit latency test output and test failure report. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a JUnit-compliant XML report by passing the --junit parameter together with the path to where the report is dumped: USD podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/junitdest:<junit_folder_path> \ -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \ /usr/bin/test-run.sh --junit <junit_folder_path> \ -ginkgo.focus="\[performance\]\ Latency\ Test" where: <junit_folder_path> Is the path to the folder where the junit report is generated 14.7. Running latency tests on a single-node OpenShift cluster You can run latency tests on single-node OpenShift clusters. Important Always run the latency tests with DISCOVERY_MODE=true set. If you don't, the test suite will make changes to the running cluster configuration. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. 
Procedure To run the latency tests on a single-node OpenShift cluster, run the following command: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=master \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \ /usr/bin/test-run.sh -ginkgo.focus="\[performance\]\ Latency\ Test" Note ROLE_WORKER_CNF=master is required because master is the only machine pool to which the node belongs. For more information about setting the required MachineConfigPool for the latency tests, see "Prerequisites for running latency tests". After running the test suite, all the dangling resources are cleaned up. 14.8. Running latency tests in a disconnected cluster The CNF tests image can run tests in a disconnected cluster that is not able to reach external registries. This requires two steps: Mirroring the cnf-tests image to the custom disconnected registry. Instructing the tests to consume the images from the custom disconnected registry. Mirroring the images to a custom registry accessible from the cluster A mirror executable is shipped in the image to provide the input required by oc to mirror the test image to a local registry. Run this command from an intermediate machine that has access to the cluster and registry.redhat.io : USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \ /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f - where: <disconnected_registry> Is the disconnected mirror registry you have configured, for example, my.local.registry:5000/ . When you have mirrored the cnf-tests image into the disconnected registry, you must override the original registry used to fetch the images when running the tests, for example: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e DISCOVERY_MODE=true -e FEATURES=performance -e IMAGE_REGISTRY="<disconnected_registry>" \ -e CNF_TESTS_IMAGE="cnf-tests-rhel8:v4.12" \ /usr/bin/test-run.sh -ginkgo.focus="\[performance\]\ Latency\ Test" Configuring the tests to consume images from a custom registry You can run the latency tests using a custom test image and image registry using CNF_TESTS_IMAGE and IMAGE_REGISTRY variables. To configure the latency tests to use a custom test image and image registry, run the following command: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e IMAGE_REGISTRY="<custom_image_registry>" \ -e CNF_TESTS_IMAGE="<custom_cnf-tests_image>" \ -e FEATURES=performance \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/test-run.sh where: <custom_image_registry> is the custom image registry, for example, custom.registry:5000/ . <custom_cnf-tests_image> is the custom cnf-tests image, for example, custom-cnf-tests-image:latest . Mirroring images to the cluster OpenShift image registry OpenShift Container Platform provides a built-in container image registry, which runs as a standard workload on the cluster. 
Procedure Gain external access to the registry by exposing it with a route: USD oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Fetch the registry endpoint by running the following command: USD REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') Create a namespace for exposing the images: USD oc create ns cnftests Make the image stream available to all the namespaces used for tests. This is required to allow the tests namespaces to fetch the images from the cnf-tests image stream. Run the following commands: USD oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests USD oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests Retrieve the docker secret name and auth token by running the following commands: USD SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk {'print USD1'} USD TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath="{.data['\.dockercfg']}" | base64 --decode | jq '.["image-registry.openshift-image-registry.svc:5000"].auth') Create a dockerauth.json file, for example: USD echo "{\"auths\": { \"USDREGISTRY\": { \"auth\": USDTOKEN } }}" > dockerauth.json Do the image mirroring: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:4.12 \ /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror --insecure=true \ -a=USD(pwd)/dockerauth.json -f - Run the tests: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e DISCOVERY_MODE=true -e FEATURES=performance -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests \ cnf-tests-local:latest /usr/bin/test-run.sh -ginkgo.focus="\[performance\]\ Latency\ Test" Mirroring a different set of test images You can optionally change the default upstream images that are mirrored for the latency tests. Procedure The mirror command tries to mirror the upstream images by default. This can be overridden by passing a file with the following format to the image: [ { "registry": "public.registry.io:5000", "image": "imageforcnftests:4.12" } ] Pass the file to the mirror command, for example saving it locally as images.json . With the following command, the local path is mounted in /kubeconfig inside the container and that can be passed to the mirror command. USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/mirror \ --registry "my.local.registry:5000/" --images "/kubeconfig/images.json" \ | oc image mirror -f - 14.9. Troubleshooting errors with the cnf-tests container To run latency tests, the cluster must be accessible from within the cnf-tests container. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Verify that the cluster is accessible from inside the cnf-tests container by running the following command: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \ oc get nodes If this command does not work, an error related to spanning across DNS, MTU size, or firewall access might be occurring.
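To narrow down where the connection fails, you can first confirm which API endpoint the mounted kubeconfig points to and whether a simple API request succeeds from inside the container. This diagnostic sketch reuses the same invocation pattern as the command above; only the oc subcommands differ:
$ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
    registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \
    oc whoami --show-server
$ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
    registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 \
    oc get clusterversion
If the server URL is correct but the requests time out, check DNS resolution, MTU size, and firewall rules on the path between the machine that runs podman and the cluster API.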
[ "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e FEATURES=performance -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 -e PERF_TEST_PROFILE=<performance_profile> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/test-run.sh -ginkgo.focus=\"[performance]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"hwlatdetect\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0908 15:25:20.023712 27 request.go:601] Waited for 1.046586367s due to client-side throttling, not priority and fairness, request: GET:https://api.hlxcl6.lab.eng.tlv2.redhat.com:6443/apis/imageregistry.operator.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662650718 Will run 1 of 194 specs [...] • Failure [283.574 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:228 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:236 Log file created at: 2022/09/08 15:25:27 Running on machine: hwlatdetect-b6n4n Binary: Built with gc go1.17.12 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0908 15:25:27.160620 1 node.go:39] Environment information: /proc/cmdline: BOOT_IMAGE=(hd1,gpt3)/ostree/rhcos-c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/vmlinuz-4.18.0-372.19.1.el8_6.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/0 ip=dhcp root=UUID=5f80c283-f6e6-4a27-9b47-a287157483b2 rw rootflags=prjquota boot=UUID=773bf59a-bafd-48fc-9a87-f62252d739d3 skew_tick=1 nohz=on rcu_nocbs=0-3 tuned.non_isolcpus=0000ffff,ffffffff,fffffff0 systemd.cpu_affinity=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79 intel_iommu=on iommu=pt isolcpus=managed_irq,0-3 nohz_full=0-3 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off skew_tick=1 rcutree.kthread_prio=11 + + I0908 15:25:27.160830 1 node.go:46] Environment information: kernel version 4.18.0-372.19.1.el8_6.x86_64 I0908 15:25:27.160857 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 100 --window 10000000us --width 950000us] F0908 15:27:10.603523 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 100 seconds detector: tracer 
parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling period: 9050000us Output File: None Starting test test finished Max Latency: 326us 2 Samples recorded: 5 Samples exceeding threshold: 5 ts: 1662650739.017274507, inner:6, outer:6 ts: 1662650749.257272414, inner:14, outer:326 ts: 1662650779.977272835, inner:314, outer:12 ts: 1662650800.457272384, inner:3, outer:9 ts: 1662650810.697273520, inner:3, outer:2 [...] JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:476 Ran 1 of 194 Specs in 365.797 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (366.08s) FAIL", "hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0", "hwlatdetect: test duration 3600 seconds detector: tracer parameters:Latency threshold: 10usSample window: 1000000us Sample width: 950000usNon-sampling period: 50000usOutput File: None Starting tests:1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"cyclictest\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=cyclictest I0908 13:01:59.193776 27 request.go:601] Waited for 1.046228824s due to client-side throttling, not priority and fairness, request: GET:https://api.compute-1.example.com:6443/apis/packages.operators.coreos.com/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662642118 Will run 1 of 194 specs [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the cyclictest image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:220 Ran 1 of 194 Specs in 161.151 seconds FAIL! 
-- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (161.48s) FAIL", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 Histogram Overflow at cycle number: Thread 0: Thread 1: Thread 2: Thread 3: Thread 4: Thread 5: Thread 6: Thread 7: Thread 8: Thread 9: Thread 10: Thread 11: Thread 12: Thread 13: Thread 14: Thread 15:", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 Histogram Overflow at cycle number: Thread 0: 155922 Thread 1: 110064 Thread 2: 110064 Thread 3: 110063 155921 Thread 4: 110063 155921 Thread 5: 155920 Thread 6: Thread 7: 110062 Thread 8: 110062 Thread 9: 155919 Thread 10: 110061 155919 Thread 11: 155918 Thread 12: 155918 Thread 13: 110060 Thread 14: 110060 Thread 15: 110059 155917", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"oslat\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=oslat I0908 12:51:55.999393 27 request.go:601] Waited for 1.044848101s due to client-side throttling, not priority and fairness, request: GET:https://compute-1.example.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests 
================================================= Random Seed: 1662641514 Will run 1 of 194 specs [...] • Failure [77.833 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the oslat image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:128 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:153 The current latency 304 is bigger than the expected one 1 : 1 [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the oslat image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:177 Ran 1 of 194 Specs in 161.091 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 193 Skipped --- FAIL: TestTest (161.42s) FAIL", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/reportdest:<report_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/test-run.sh --report <report_folder_path> -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/junitdest:<junit_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/test-run.sh --junit <junit_folder_path> -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance -e ROLE_WORKER_CNF=master registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance -e IMAGE_REGISTRY=\"<disconnected_registry>\" -e CNF_TESTS_IMAGE=\"cnf-tests-rhel8:v4.12\" /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"<custom_image_registry>\" -e CNF_TESTS_IMAGE=\"<custom_cnf-tests_image>\" -e FEATURES=performance registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/test-run.sh", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc create ns cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests", "SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk {'print USD1'}", "TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath=\"{.data['\\.dockercfg']}\" | base64 --decode | jq '.[\"image-registry.openshift-image-registry.svc:5000\"].auth')", "echo \"{\\\"auths\\\": { \\\"USDREGISTRY\\\": { 
\\\"auth\\\": USDTOKEN } }}\" > dockerauth.json", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:4.12 /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror --insecure=true -a=USD(pwd)/dockerauth.json -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e FEATURES=performance -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"", "[ { \"registry\": \"public.registry.io:5000\", \"image\": \"imageforcnftests:4.12\" } ]", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 /usr/bin/mirror --registry \"my.local.registry:5000/\" --images \"/kubeconfig/images.json\" | oc image mirror -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.12 get nodes" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/scalability_and_performance/cnf-performing-platform-verification-latency-tests
Installing Red Hat Trusted Application Pipeline
Installing Red Hat Trusted Application Pipeline Red Hat Trusted Application Pipeline 1.4 Learn how to install Red Hat Trusted Application Pipeline in your cluster. Red Hat Customer Content Services
[ "podman login registry.redhat.io", "podman pull registry.redhat.io/rhtap-cli/rhtap-cli-rhel9:latest", "podman run -it --entrypoint=bash --publish 8228:8228 --rm rhtap-cli:latest", "bash-5.1USD oc login https://api.<input omitted>.openshiftapps.com:443 --username cluster-admin --password <input omitted>", "bash-5.1USD rhtap-cli integration github-app --create --token=\"USDGH_TOKEN\" --org=\"USDGH_ORG_NAME\" USDGH_APP_NAME", "bash-5.1USD rhtap-cli integration acs --endpoint=\"USDACS_ENDPOINT\" --token=\"USDACS_TOKEN\"", "bash-5.1USD rhtap-cli integration quay --dockerconfigjson='USDQUAY_DOCKERCONFIGJSON' --token=\"USDQUAY_TOKEN\" --url=\"USDQUAY_URL\"", "bash-5.1USD rhtap-cli integration bitbucket --username=\"USDBB_USERNAME\" --app-password=\"USDBB_TOKEN\" --host=\"USDBB_URL\"", "bash-5.1USD rhtap-cli integration gitlab --token=\"USDGL_API_TOKEN\" --host=\"USDGL_URL\"", "bash-5.1USD rhtap-cli integration jenkins --token=\"USDJK_API_TOKEN\" --url=\"USDJK_URL\" --username=\"USDJK_USERNAME\"", "bash-5.1USD rhtap-cli integration artifactory --url=\"USDAF_URL\" --dockerconfigjson='USDAF_DOCKERCONFIGJSON' --token=\"USDAF_API_TOKEN\"", "bash-5.1USD cp config.yaml my-config.yaml", "bash-5.1USD vi my-config.yaml", "redHatDeveloperHub: enabled: &rhdhEnabled true namespace: *installerNamespace properties: catalogURL: https://github.com/<your username>/tssc-sample-templates/blob/release/all.yaml", "redHatAdvancedClusterSecurity: enabled: &rhacsEnabled false namespace: rhtap-acs", "bash-5.1USD rhtap-cli deploy --config=USDCONFIG" ]
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html-single/installing_red_hat_trusted_application_pipeline/index
Chapter 53. DeploymentTemplate schema reference
Chapter 53. DeploymentTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , JmxTransTemplate , KafkaBridgeTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate Full list of DeploymentTemplate schema properties Use deploymentStrategy to specify the strategy used to replace old pods with new ones when deployment configuration changes. Use one of the following values: RollingUpdate : Pods are restarted with zero downtime. Recreate : Pods are terminated before new ones are created. Using the Recreate deployment strategy has the advantage of not requiring spare resources, but the disadvantage is the application downtime. Example showing the deployment strategy set to Recreate . # ... template: deployment: deploymentStrategy: Recreate # ... This configuration change does not cause a rolling update. 53.1. DeploymentTemplate schema properties Property Property type Description metadata MetadataTemplate Metadata applied to the resource. deploymentStrategy string (one of [RollingUpdate, Recreate]) Pod replacement strategy for deployment configuration changes. Valid values are RollingUpdate and Recreate . Defaults to RollingUpdate .
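For example, a KafkaConnect resource that replaces its Connect pods with the Recreate strategy might look like the following sketch. The resource name and bootstrap address are placeholders; adjust them to your deployment:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  template:
    deployment:
      deploymentStrategy: Recreate
Because Recreate terminates the old pods before new ones are created, expect a brief interruption of the Connect REST API and its connectors whenever the deployment configuration changes.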
[ "template: deployment: deploymentStrategy: Recreate" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-DeploymentTemplate-reference
Providing Feedback on Red Hat Documentation
Providing Feedback on Red Hat Documentation We appreciate your input on our documentation. Please let us know how we could make it better. You can submit feedback by filing a ticket in Bugzilla: Navigate to the Bugzilla website. In the Component field, use Documentation . In the Description field, enter your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_connected_network_environment/providing-feedback-on-red-hat-documentation_satellite
Chapter 2. Differences from upstream OpenJDK 11
Chapter 2. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 11 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources For more information about detecting if a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies .
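Because the RHEL build of Red Hat build of OpenJDK takes its FIPS and cryptographic policy settings from the host, you can preview what the JDK will pick up by inspecting the host configuration. A short sketch for a RHEL 8 host; both commands are standard RHEL tools and their output depends on your system:
# Check whether the host is running in FIPS mode
$ fips-mode-setup --check
# Show the active system-wide cryptographic policy that the JDK follows
$ update-crypto-policies --show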
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.14/rn-openjdk-diff-from-upstream
Chapter 6. Using operational features of Service Telemetry Framework
Chapter 6. Using operational features of Service Telemetry Framework You can use the following operational features to provide additional functionality to the Service Telemetry Framework (STF): Configuring dashboards Configuring the metrics retention time period Configuring alerts Configuring SNMP traps Configuring high availability Configuring an alternate observability strategy Monitoring the resource use of OpenStack services Monitoring container health and API status 6.1. Dashboards in Service Telemetry Framework Use the third-party application, Grafana, to visualize system-level metrics that the data collectors collectd and Ceilometer gather for each individual host node. For more information about configuring data collectors, see Section 4.1, "Deploying Red Hat OpenStack Platform overcloud for Service Telemetry Framework using director" . You can use dashboards to monitor a cloud: Infrastructure dashboard Use the infrastructure dashboard to view metrics for a single node at a time. Select a node from the upper left corner of the dashboard. Cloud view dashboard Use the cloud view dashboard to view panels to monitor service resource usage, API stats, and cloud events. You must enable API health monitoring and service monitoring to provide the data for this dashboard. API health monitoring is enabled by default in the STF base configuration. For more information, see Section 4.1.3, "Creating the base configuration for STF" . For more information about API health monitoring, see Section 6.8, "Red Hat OpenStack Platform API status and containerized services health" . For more information about RHOSP service monitoring, see Section 6.7, "Resource usage of Red Hat OpenStack Platform services" . Virtual machine view dashboard Use the virtual machine view dashboard to view panels to monitor virtual machine infrastructure usage. Select a cloud and project from the upper left corner of the dashboard. You must enable event storage if you want to enable the event annotations on this dashboard. For more information, see Section 3.2, "Creating a ServiceTelemetry object in Red Hat OpenShift Container Platform" . Memcached view dashboard Use the memcached view dashboard to view panels to monitor connections, availability, system metrics and cache performance. Select a cloud from the upper left corner of the dashboard. 6.1.1. Configuring Grafana to host the dashboard Grafana is not included in the default Service Telemetry Framework (STF) deployment, so you must deploy the Grafana Operator from community-operators CatalogSource. If you use the Service Telemetry Operator to deploy Grafana, it results in a Grafana instance and the configuration of the default data sources for the local STF deployment. Procedure Log in to your Red Hat OpenShift Container Platform environment where STF is hosted. Subscribe to the Grafana Operator by using the community-operators CatalogSource: Warning Community Operators are Operators which have not been vetted or verified by Red Hat. Community Operators should be used with caution because their stability is unknown. Red Hat provides no support for community Operators. 
Learn more about Red Hat's third party software support policy USD oc apply -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/grafana-operator.openshift-operators: "" name: grafana-operator namespace: openshift-operators spec: channel: v5 installPlanApproval: Automatic name: grafana-operator source: community-operators sourceNamespace: openshift-marketplace EOF Verify that the Operator launched successfully. In the command output, if the value of the PHASE column is Succeeded , the Operator launched successfully: USD oc wait --for jsonpath="{.status.phase}"=Succeeded csv --namespace openshift-operators -l operators.coreos.com/grafana-operator.openshift-operators clusterserviceversion.operators.coreos.com/grafana-operator.v5.6.0 condition met To launch a Grafana instance, create or modify the ServiceTelemetry object. Set graphing.enabled and graphing.grafana.ingressEnabled to true . Optionally, set the value of graphing.grafana.baseImage to the Grafana workload container image that will be deployed: USD oc edit stf default apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry ... spec: ... graphing: enabled: true grafana: ingressEnabled: true baseImage: 'registry.redhat.io/rhel8/grafana:9' Verify that the Grafana instance deployed: USD oc wait --for jsonpath="{.status.phase}"=Running pod -l app=default-grafana --timeout=600s pod/default-grafana-deployment-669968df64-wz5s2 condition met Verify that the Grafana data sources installed correctly: USD oc get grafanadatasources.grafana.integreatly.org NAME NO MATCHING INSTANCES LAST RESYNC AGE default-ds-stf-prometheus 2m35s 2m56s Verify that the Grafana route exists: USD oc get route default-grafana-route NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD default-grafana-route default-grafana-route-service-telemetry.apps.infra.watch default-grafana-service web reencrypt None 6.1.2. Enabling dashboards The Grafana Operator can import and manage dashboards by creating GrafanaDashboard objects. Service Telemetry Operator can enable a set of default dashboards that create the GrafanaDashboard objects that load dashboards into the Grafana instance. Set the value of graphing.grafana.dashboards.enabled to true to load the following dashboards into Grafana : Infrastructure dashboard Cloud view dashboard Virtual machine view dashboard Memcached view dashboard You can use the GrafanaDashboard object to create and load additional dashboards into Grafana. For more information about managing dashboards with Grafana Operator, see Dashboards in the Grafana Operator project documentation . Prerequisites You enabled graphing in the ServiceTelemetry object. For more information about graphing, see Section 6.1.1, "Configuring Grafana to host the dashboard" . Procedure To enable the managed dashboards, create or modify the ServiceTelemetry object. Set graphing.grafana.dashboards.enabled to true : USD oc edit stf default apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry ... spec: ... graphing: enabled: true grafana: dashboards: enabled: true Verify that the Grafana dashboards are created. The process of Service Telemetry Operator creating the dashboards might take some time. 
USD oc get grafanadashboards.grafana.integreatly.org NAME NO MATCHING INSTANCES LAST RESYNC AGE memcached-dashboard-1 38s 38s rhos-cloud-dashboard-1 39s 39s rhos-dashboard-1 39s 39s virtual-machine-dashboard-1 37s 37s Retrieve the Grafana route address: USD oc get route default-grafana-route -ojsonpath='{.spec.host}' default-grafana-route-service-telemetry.apps.infra.watch In a web browser, navigate to https:// <grafana_route_address> . Replace <grafana_route_address> with the value that you retrieved in the step. Log in with OpenShift credentials. For more information about logging in, see Section 3.3, "Accessing user interfaces for STF components" . To view the dashboard, click Dashboards and Browse . The managed dashboards are available in the service-telemetry folder. 6.1.3. Connecting an external dashboard system It is possible to configure third-party visualization tools to connect to the STF Prometheus for metrics retrieval. Access is controlled via an OAuth token, and a ServiceAccount is already created that has (only) the required permissions. A new OAuth token can be generated against this account for the external system to use. To use the authentication token, the third-party tool must be configured to supply an HTTP Bearer Token Authorization header as described in RFC6750. Consult the documentation of the third-party tool for how to configure this header. For example Configure Prometheus - Custom HTTP Headers in the Grafana Documentation . Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Create a new token secret for the stf-prometheus-reader service account USD oc create -f - <<EOF apiVersion: v1 kind: Secret metadata: name: my-prometheus-reader-token namespace: service-telemetry annotations: kubernetes.io/service-account.name: stf-prometheus-reader type: kubernetes.io/service-account-token EOF Retrieve the token from the secret USD TOKEN=USD(oc get secret my-prometheus-reader-token -o template='{{.data.token}}' | base64 -d) Retrieve the Prometheus host name USD PROM_HOST=USD(oc get route default-prometheus-proxy -ogo-template='{{ .spec.host }}') Test the access token USD curl -k -H "Authorization: Bearer USD{TOKEN}" https://USD{PROM_HOST}/api/v1/query?query=up {"status":"success",[...] Configure your third-party tool with the PROM_HOST and TOKEN values from above USD echo USDPROM_HOST USD echo USDTOKEN The token remains valid as long as the secret exists. You can revoke the token by deleting the secret. USD oc delete secret my-prometheus-reader-token secret "my-prometheus-reader-token" deleted Additional information For more information about service account token secrets, see Creating a service account token secret in the OpenShift Container Platform Documentation . 6.2. Metrics retention time period in Service Telemetry Framework The default retention time for metrics stored in Service Telemetry Framework (STF) is 24 hours, which provides enough data for trends to develop for the purposes of alerting. For long-term storage, use systems designed for long-term data retention, for example, Thanos. Additional resources To adjust STF for additional metrics retention time, see Section 6.2.1, "Editing the metrics retention time period in Service Telemetry Framework" . For recommendations about Prometheus data storage and estimating storage space, see https://prometheus.io/docs/prometheus/latest/storage/#operational-aspects For more information about Thanos, see https://thanos.io/ 6.2.1. 
Editing the metrics retention time period in Service Telemetry Framework You can adjust Service Telemetry Framework (STF) for additional metrics retention time. Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Edit the ServiceTelemetry object: USD oc edit stf default Add retention: 7d to the storage section of backends.metrics.prometheus.storage to increase the retention period to seven days: Note If you set a long retention period, retrieving data from heavily populated Prometheus systems can result in queries returning results slowly. apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: ... backends: metrics: prometheus: enabled: true storage: strategy: persistent retention: 7d ... Save your changes and close the object. Wait for prometheus to restart with the new settings. USD oc get po -l app.kubernetes.io/name=prometheus -w Verify the new retention setting by checking the command line arguments used in the pod. USD oc describe po prometheus-default-0 | grep retention.time --storage.tsdb.retention.time=24h Additional resources For more information about the metrics retention time, see Section 6.2, "Metrics retention time period in Service Telemetry Framework" . 6.3. Alerts in Service Telemetry Framework You create alert rules in Prometheus and alert routes in Alertmanager. Alert rules in Prometheus servers send alerts to an Alertmanager, which manages the alerts. Alertmanager can silence, inhibit, or aggregate alerts, and send notifications by using email, on-call notification systems, or chat platforms. To create an alert, complete the following tasks: Create an alert rule in Prometheus. For more information, see Section 6.3.1, "Creating an alert rule in Prometheus" . Create an alert route in Alertmanager. There are two ways in which you can create an alert route: Creating a standard alert route in Alertmanager . Creating an alert route with templating in Alertmanager . Additional resources For more information about alerts or notifications with Prometheus and Alertmanager, see https://prometheus.io/docs/alerting/overview/ To view an example set of alerts that you can use with Service Telemetry Framework (STF), see https://github.com/infrawatch/service-telemetry-operator/tree/master/deploy/alerts 6.3.1. Creating an alert rule in Prometheus Prometheus evaluates alert rules to trigger notifications. If the rule condition returns an empty result set, the condition is false. Otherwise, the rule is true and it triggers an alert. Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Create a PrometheusRule object that contains the alert rule. The Prometheus Operator loads the rule into Prometheus: USD oc apply -f - <<EOF apiVersion: monitoring.rhobs/v1 kind: PrometheusRule metadata: creationTimestamp: null labels: prometheus: default role: alert-rules name: prometheus-alarm-rules namespace: service-telemetry spec: groups: - name: ./openstack.rules rules: - alert: Collectd metrics receive rate is zero expr: rate(sg_total_collectd_msg_received_count[1m]) == 0 EOF To change the rule, edit the value of the expr parameter. 
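For example, a rule can also set a hold duration with the for field and attach labels that Alertmanager later uses for routing. The following sketch is illustrative only; the threshold and the severity value are placeholders that you should adapt to your environment:
$ oc apply -f - <<EOF
apiVersion: monitoring.rhobs/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: default
    role: alert-rules
  name: prometheus-custom-alarm-rules
  namespace: service-telemetry
spec:
  groups:
    - name: ./custom.rules
      rules:
        - alert: Collectd metrics receive rate is low
          expr: rate(sg_total_collectd_msg_received_count[5m]) < 100
          for: 10m
          labels:
            severity: warning
EOF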
To verify that the Operator loaded the rules into Prometheus, run the curl command against the default-prometheus-proxy route with basic authentication: USD curl -k -H "Authorization: Bearer USD(oc create token stf-prometheus-reader)" https://USD(oc get route default-prometheus-proxy -ogo-template='{{ .spec.host }}')/api/v1/rules {"status":"success","data":{"groups":[{"name":"./openstack.rules","file":"/etc/prometheus/rules/prometheus-default-rulefiles-0/service-telemetry-prometheus-alarm-rules.yaml","rules":[{"state":"inactive","name":"Collectd metrics receive count is zero","query":"rate(sg_total_collectd_msg_received_count[1m]) == 0","duration":0,"labels":{},"annotations":{},"alerts":[],"health":"ok","evaluationTime":0.00034627,"lastEvaluation":"2021-12-07T17:23:22.160448028Z","type":"alerting"}],"interval":30,"evaluationTime":0.000353787,"lastEvaluation":"2021-12-07T17:23:22.160444017Z"}]}} Additional resources For more information on alerting, see https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/alerting.md 6.3.2. Configuring custom alerts You can add custom alerts to the PrometheusRule object that you created in Section 6.3.1, "Creating an alert rule in Prometheus" . Procedure Use the oc edit command: USD oc edit prometheusrules.monitoring.rhobs prometheus-alarm-rules Edit the PrometheusRules manifest. Save and close the manifest. Additional resources For more information about how to configure alerting rules, see https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/ . For more information about PrometheusRules objects, see https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/alerting.md 6.3.3. Creating a standard alert route in Alertmanager Use Alertmanager to deliver alerts to an external system, such as email, IRC, or other notification channel. The Prometheus Operator manages the Alertmanager configuration as a Red Hat OpenShift Container Platform secret. By default, Service Telemetry Framework (STF) deploys a basic configuration that results in no receivers: alertmanager.yaml: |- global: resolve_timeout: 5m route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'null' receivers: - name: 'null' To deploy a custom Alertmanager route with STF, you must add a alertmanagerConfigManifest parameter to the Service Telemetry Operator that results in an updated secret, managed by the Prometheus Operator. Note If your alertmanagerConfigManifest contains a custom template, for example, to construct the title and text of the sent alert, you must deploy the contents of the alertmanagerConfigManifest using a base64-encoded configuration. For more information, see Section 6.3.4, "Creating an alert route with templating in Alertmanager" . Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Edit the ServiceTelemetry object for your STF deployment: USD oc edit stf default Add the new parameter alertmanagerConfigManifest and the Secret object contents to define the alertmanager.yaml configuration for Alertmanager: Note This step loads the default template that the Service Telemetry Operator manages. To verify that the changes are populating correctly, change a value, return the alertmanager-default secret, and verify that the new value is loaded into memory. For example, change the value of the parameter global.resolve_timeout from 5m to 10m . 
apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: backends: metrics: prometheus: enabled: true alertmanagerConfigManifest: | apiVersion: v1 kind: Secret metadata: name: 'alertmanager-default' namespace: 'service-telemetry' type: Opaque stringData: alertmanager.yaml: |- global: resolve_timeout: 10m route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'null' receivers: - name: 'null' Verify that the configuration has been applied to the secret: USD oc get secret alertmanager-default -o go-template='{{index .data "alertmanager.yaml" | base64decode }}' global: resolve_timeout: 10m route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'null' receivers: - name: 'null' Run the wget command from the prometheus pod against the alertmanager-proxy service to retrieve the status and configYAML contents, and verify that the supplied configuration matches the configuration in Alertmanager: USD oc exec -it prometheus-default-0 -c prometheus -- sh -c "wget --header \"Authorization: Bearer \USD(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://default-alertmanager-proxy:9095/api/v1/status -q -O -" {"status":"success","data":{"configYAML":"...",...}} Verify that the configYAML field contains the changes you expect. Additional resources For more information about the Red Hat OpenShift Container Platform secret and the Prometheus operator, see Prometheus user guide on alerting . 6.3.4. Creating an alert route with templating in Alertmanager Use Alertmanager to deliver alerts to an external system, such as email, IRC, or other notification channel. The Prometheus Operator manages the Alertmanager configuration as a Red Hat OpenShift Container Platform secret. By default, Service Telemetry Framework (STF) deploys a basic configuration that results in no receivers: alertmanager.yaml: |- global: resolve_timeout: 5m route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'null' receivers: - name: 'null' If the alertmanagerConfigManifest parameter contains a custom template, for example, to construct the title and text of the sent alert, you must deploy the contents of the alertmanagerConfigManifest by using a base64-encoded configuration. Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Create the necessary alertmanager config in a file called alertmanager.yaml, for example: USD cat > alertmanager.yaml <<EOF global: resolve_timeout: 10m slack_api_url: <slack_api_url> receivers: - name: slack slack_configs: - channel: #stf-alerts title: |- ... text: >- ... 
route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'slack' EOF Generate the config manifest and add it to the ServiceTelemetry object for your STF deployment: USD CONFIG_MANIFEST=USD(oc create secret --dry-run=client generic alertmanager-default --from-file=alertmanager.yaml -o json) USD oc patch stf default --type=merge -p '{"spec":{"alertmanagerConfigManifest":'"USDCONFIG_MANIFEST"'}}' Verify that the configuration has been applied to the secret: Note There will be a short delay as the operators update each object USD oc get secret alertmanager-default -o go-template='{{index .data "alertmanager.yaml" | base64decode }}' global: resolve_timeout: 10m slack_api_url: <slack_api_url> receivers: - name: slack slack_configs: - channel: #stf-alerts title: |- ... text: >- ... route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'slack' Run the wget command from the prometheus pod against the alertmanager-proxy service to retrieve the status and configYAML contents, and verify that the supplied configuration matches the configuration in Alertmanager: USD oc exec -it prometheus-default-0 -c prometheus -- /bin/sh -c "wget --header \"Authorization: Bearer \USD(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://default-alertmanager-proxy:9095/api/v1/status -q -O -" {"status":"success","data":{"configYAML":"...",...}} Verify that the configYAML field contains the changes you expect. Additional resources For more information about the Red Hat OpenShift Container Platform secret and the Prometheus operator, see Prometheus user guide on alerting . 6.4. Sending alerts as SNMP traps To enable SNMP traps, modify the ServiceTelemetry object and configure the snmpTraps parameters. SNMP traps are sent using version 2c. 6.4.1. Configuration parameters for snmpTraps The snmpTraps parameter contains the following sub-parameters for configuring the alert receiver: enabled Set the value of this sub-parameter to true to enable the SNMP trap alert receiver. The default value is false. target Target address to send SNMP traps. Value is a string. Default is 192.168.24.254 . port Target port to send SNMP traps. Value is an integer. Default is 162 . community Target community to send SNMP traps to. Value is a string. Default is public . retries SNMP trap retry delivery limit. Value is an integer. Default is 5 . timeout SNMP trap delivery timeout defined in seconds. Value is an integer. Default is 1 . alertOidLabel Label name in the alert that defines the OID value to send the SNMP trap as. Value is a string. Default is oid . trapOidPrefix SNMP trap OID prefix for variable bindings. Value is a string. Default is 1.3.6.1.4.1.50495.15 . trapDefaultOid SNMP trap OID when no alert OID label has been specified with the alert. Value is a string. Default is 1.3.6.1.4.1.50495.15.1.2.1 . trapDefaultSeverity SNMP trap severity when no alert severity has been set. Value is a string. Defaults to an empty string. Configure the snmpTraps parameter as part of the alerting.alertmanager.receivers definition in the ServiceTelemetry object: apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: alerting: alertmanager: receivers: snmpTraps: alertOidLabel: oid community: public enabled: true port: 162 retries: 5 target: 192.168.25.254 timeout: 1 trapDefaultOid: 1.3.6.1.4.1.50495.15.1.2.1 trapDefaultSeverity: "" trapOidPrefix: 1.3.6.1.4.1.50495.15 ... 6.4.2. 
Overview of the MIB definition Delivery of SNMP traps uses object identifier (OID) value 1.3.6.1.4.1.50495.15.1.2.1 by default. The management information base (MIB) schema is available at https://github.com/infrawatch/prometheus-webhook-snmp/blob/master/PROMETHEUS-ALERT-CEPH-MIB.txt . The OID number is comprised of the following component values: * The value 1.3.6.1.4.1 is a global OID defined for private enterprises. * The identifier 50495 is a private enterprise number assigned by IANA for the Ceph organization. * The other values are child OIDs of the parent. 15 prometheus objects 15.1 prometheus alerts 15.1.2 prometheus alert traps 15.1.2.1 prometheus alert trap default The prometheus alert trap default is an object comprised of several other sub-objects to OID 1.3.6.1.4.1.50495.15 which is defined by the alerting.alertmanager.receivers.snmpTraps.trapOidPrefix parameter: <trapOidPrefix>.1.1.1 alert name <trapOidPrefix>.1.1.2 status <trapOidPrefix>.1.1.3 severity <trapOidPrefix>.1.1.4 instance <trapOidPrefix>.1.1.5 job <trapOidPrefix>.1.1.6 description <trapOidPrefix>.1.1.7 labels <trapOidPrefix>.1.1.8 timestamp <trapOidPrefix>.1.1.9 rawdata The following is example output from a simple SNMP trap receiver that outputs the received trap to the console: 6.4.3. Configuring SNMP traps Prerequisites Ensure that you know the IP address or hostname of the SNMP trap receiver where you want to send the alerts to. Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry To enable SNMP traps, modify the ServiceTelemetry object: USD oc edit stf default Set the alerting.alertmanager.receivers.snmpTraps parameters: apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry ... spec: ... alerting: alertmanager: receivers: snmpTraps: enabled: true target: 10.10.10.10 Ensure that you set the value of target to the IP address or hostname of the SNMP trap receiver. Additional Information For more information about available parameters for snmpTraps , see Section 6.4.1, "Configuration parameters for snmpTraps" . 6.4.4. Creating alerts for SNMP traps You can create alerts that are configured for delivery by SNMP traps by adding labels that are parsed by the prometheus-webhook-snmp middleware to define the trap information and delivered object identifiers (OID). Adding the oid or severity labels is only required if you need to change the default values for a particular alert definition. Note When you set the oid label, the top-level SNMP trap OID changes, but the sub-OIDs remain defined by the global trapOidPrefix value plus the child OID values .1.1.1 through .1.1.9 . For more information about the MIB definition, see Section 6.4.2, "Overview of the MIB definition" . Procedure Log in to Red Hat OpenShift Container Platform. 
Change to the service-telemetry namespace: USD oc project service-telemetry Create a PrometheusRule object that contains the alert rule and an oid label that contains the SNMP trap OID override value: USD oc apply -f - <<EOF apiVersion: monitoring.rhobs/v1 kind: PrometheusRule metadata: creationTimestamp: null labels: prometheus: default role: alert-rules name: prometheus-alarm-rules-snmp namespace: service-telemetry spec: groups: - name: ./openstack.rules rules: - alert: Collectd metrics receive rate is zero expr: rate(sg_total_collectd_msg_received_count[1m]) == 0 labels: oid: 1.3.6.1.4.1.50495.15.1.2.1 severity: critical EOF Additional information For more information about configuring alerts, see Section 6.3, "Alerts in Service Telemetry Framework" . 6.5. High availability Warning STF high availability (HA) mode is deprecated and is not supported in production environments. Red Hat OpenShift Container Platform is a highly-available platform, and you can cause issues and complicate debugging in STF if you enable HA mode. With high availability, Service Telemetry Framework (STF) can rapidly recover from failures in its component services. Although Red Hat OpenShift Container Platform restarts a failed pod if nodes are available to schedule the workload, this recovery process might take more than one minute, during which time events and metrics are lost. A high availability configuration includes multiple copies of STF components, which reduces recovery time to approximately 2 seconds. To protect against failure of an Red Hat OpenShift Container Platform node, deploy STF to an Red Hat OpenShift Container Platform cluster with three or more nodes. Enabling high availability has the following effects: The following components run two pods instead of the default one: AMQ Interconnect Alertmanager Prometheus Events Smart Gateway Metrics Smart Gateway Recovery time from a lost pod in any of these services reduces to approximately 2 seconds. 6.5.1. Configuring high availability To configure Service Telemetry Framework (STF) for high availability, add highAvailability.enabled: true to the ServiceTelemetry object in Red Hat OpenShift Container Platform. You can set this parameter at installation time or, if you already deployed STF, complete the following steps: Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Use the oc command to edit the ServiceTelemetry object: USD oc edit stf default Add highAvailability.enabled: true to the spec section: apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry ... spec: ... highAvailability: enabled: true Save your changes and close the object. 6.6. Observability Strategy in Service Telemetry Framework Service Telemetry Framework (STF) does not include event storage backends or dashboarding tools. STF can optionally create datasource configurations for Grafana using the community operator to provide a dashboarding interface. Instead of having Service Telemetry Operator create custom resource requests, you can use your own deployments of these applications or other compatible applications, and scrape the metrics Smart Gateways for delivery to your own Prometheus-compatible system for telemetry storage. If you set the observabilityStrategy to none , then storage backends will not be deployed so persistent storage will not be required by STF. Use the observabilityStrategy property on the STF object to specify which type of observability components will be deployed. 
The following values are available: value meaning use_redhat Red Hat supported components are requested by STF. This includes Prometheus and Alertmanager from the Cluster Observability Operator, but no resource requests to Elastic Cloud on Kubernetes (ECK) Operator. If enabled, resources are also requested from the Grafana Operator (community component). use_hybrid In addition to the Red Hat supported components, Elasticsearch and Grafana resources are also requested (if specified in the ServiceTelemetry object) use_community The community version of Prometheus Operator is used instead of Cluster Observability Operator. Elasticsearch and Grafana resources are also requested (if specified in the ServiceTelemetry object) none No storage or alerting components are deployed Note Newly deployed STF environments as of 1.5.3 default to use_redhat . Existing STF deployments created before 1.5.3 default to use_community . To migrate an existing STF deployment to use_redhat , see the Red Hat Knowledge Base article Migrating Service Telemetry Framework to fully supported operators . 6.6.1. Configuring an alternate observability strategy To skip the deployment of storage, visualization, and alerting backends, add observabilityStrategy: none to the ServiceTelemetry spec. In this mode, you only deploy AMQ Interconnect routers and Smart Gateways, and you must configure an external Prometheus-compatible system to collect metrics from the STF Smart Gateways, and an external Elasticsearch to receive the forwarded events. Procedure Create a ServiceTelemetry object with the property observabilityStrategy: none in the spec parameter. The manifest shows results in a default deployment of STF that is suitable for receiving telemetry from a single cloud with all metrics collector types. USD oc apply -f - <<EOF apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: observabilityStrategy: none EOF Delete the remaining objects that are managed by community operators USD for o in alertmanagers.monitoring.rhobs/default prometheuses.monitoring.rhobs/default elasticsearch/elasticsearch grafana/default-grafana; do oc delete USDo; done To verify that all workloads are operating correctly, view the pods and the status of each pod: USD oc get pods NAME READY STATUS RESTARTS AGE default-cloud1-ceil-event-smartgateway-6f8547df6c-p2db5 3/3 Running 0 132m default-cloud1-ceil-meter-smartgateway-59c845d65b-gzhcs 3/3 Running 0 132m default-cloud1-coll-event-smartgateway-bf859f8d77-tzb66 3/3 Running 0 132m default-cloud1-coll-meter-smartgateway-75bbd948b9-d5phm 3/3 Running 0 132m default-cloud1-sens-meter-smartgateway-7fdbb57b6d-dh2g9 3/3 Running 0 132m default-interconnect-668d5bbcd6-57b2l 1/1 Running 0 132m interconnect-operator-b8f5bb647-tlp5t 1/1 Running 0 47h service-telemetry-operator-566b9dd695-wkvjq 1/1 Running 0 156m smart-gateway-operator-58d77dcf7-6xsq7 1/1 Running 0 47h Additional resources For more information about configuring additional clouds or to change the set of supported collectors, see Section 4.3.2, "Deploying Smart Gateways" . To migrate an existing STF deployment to use_redhat , see the Red Hat Knowledge Base article Migrating Service Telemetry Framework to fully supported operators . 6.7. 
Resource usage of Red Hat OpenStack Platform services You can monitor the resource usage of the Red Hat OpenStack Platform (RHOSP) services, such as the APIs and other infrastructure processes, to identify bottlenecks in the overcloud by showing services that run out of compute power. Resource usage monitoring is enabled by default. Additional resources To disable resource usage monitoring, see Section 6.7.1, "Disabling resource usage monitoring of Red Hat OpenStack Platform services" . 6.7.1. Disabling resource usage monitoring of Red Hat OpenStack Platform services To disable the monitoring of RHOSP containerized service resource usage, you must set the CollectdEnableLibpodstats parameter to false . Prerequisites You have created the stf-connectors.yaml file. For more information, see Section 4.1, "Deploying Red Hat OpenStack Platform overcloud for Service Telemetry Framework using director" . You are using the most current version of Red Hat OpenStack Platform (RHOSP) 16.2. Procedure Open the stf-connectors.yaml file and add the CollectdEnableLibpodstats parameter to override the setting in enable-stf.yaml . Ensure that stf-connectors.yaml is called from the openstack overcloud deploy command after enable-stf.yaml : CollectdEnableLibpodstats: false Continue with the overcloud deployment procedure. For more information, see Section 4.1.5, "Deploying the overcloud" . 6.8. Red Hat OpenStack Platform API status and containerized services health You can use the OCI (Open Container Initiative) standard to assess the container health status of each Red Hat OpenStack Platform (RHOSP) service by periodically running a health check script. Most RHOSP services implement a health check that logs issues and returns a binary status. For the RHOSP APIs, the health checks query the root endpoint and determine the health based on the response time. Monitoring of RHOSP container health and API status is enabled by default. Additional resources To disable RHOSP container health and API status monitoring, see Section 6.8.1, "Disabling container health and API status monitoring" . 6.8.1. Disabling container health and API status monitoring To disable RHOSP containerized service health and API status monitoring, you must set the CollectdEnableSensubility parameter to false . Prerequisites You have created the stf-connectors.yaml file in your templates directory. For more information, see Section 4.1, "Deploying Red Hat OpenStack Platform overcloud for Service Telemetry Framework using director" . You are using the most current version of Red Hat OpenStack Platform (RHOSP) 16.2. Procedure Open the stf-connectors.yaml and add the CollectdEnableSensubility parameter to override the setting in enable-stf.yaml . Ensure that stf-connectors.yaml is called from the openstack overcloud deploy command after enable-stf.yaml : CollectdEnableSensubility: false Continue with the overcloud deployment procedure. For more information, see Section 4.1.5, "Deploying the overcloud" . Additional resources For more information about multiple cloud addresses, see Section 4.3, "Configuring multiple clouds" .
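As a minimal sketch, assuming that stf-connectors.yaml is an ordinary heat environment file, both overrides described above sit under parameter_defaults; include only the parameters for the monitoring that you want to disable:

    parameter_defaults:
      CollectdEnableLibpodstats: false   # disables resource usage monitoring of containerized services
      CollectdEnableSensubility: false   # disables container health and API status monitoring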
[ "oc apply -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/grafana-operator.openshift-operators: \"\" name: grafana-operator namespace: openshift-operators spec: channel: v5 installPlanApproval: Automatic name: grafana-operator source: community-operators sourceNamespace: openshift-marketplace EOF", "oc wait --for jsonpath=\"{.status.phase}\"=Succeeded csv --namespace openshift-operators -l operators.coreos.com/grafana-operator.openshift-operators clusterserviceversion.operators.coreos.com/grafana-operator.v5.6.0 condition met", "oc edit stf default apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry spec: graphing: enabled: true grafana: ingressEnabled: true baseImage: 'registry.redhat.io/rhel8/grafana:9'", "oc wait --for jsonpath=\"{.status.phase}\"=Running pod -l app=default-grafana --timeout=600s pod/default-grafana-deployment-669968df64-wz5s2 condition met", "oc get grafanadatasources.grafana.integreatly.org NAME NO MATCHING INSTANCES LAST RESYNC AGE default-ds-stf-prometheus 2m35s 2m56s", "oc get route default-grafana-route NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD default-grafana-route default-grafana-route-service-telemetry.apps.infra.watch default-grafana-service web reencrypt None", "oc edit stf default apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry spec: graphing: enabled: true grafana: dashboards: enabled: true", "oc get grafanadashboards.grafana.integreatly.org NAME NO MATCHING INSTANCES LAST RESYNC AGE memcached-dashboard-1 38s 38s rhos-cloud-dashboard-1 39s 39s rhos-dashboard-1 39s 39s virtual-machine-dashboard-1 37s 37s", "oc get route default-grafana-route -ojsonpath='{.spec.host}' default-grafana-route-service-telemetry.apps.infra.watch", "oc project service-telemetry", "oc create -f - <<EOF apiVersion: v1 kind: Secret metadata: name: my-prometheus-reader-token namespace: service-telemetry annotations: kubernetes.io/service-account.name: stf-prometheus-reader type: kubernetes.io/service-account-token EOF", "TOKEN=USD(oc get secret my-prometheus-reader-token -o template='{{.data.token}}' | base64 -d)", "PROM_HOST=USD(oc get route default-prometheus-proxy -ogo-template='{{ .spec.host }}')", "curl -k -H \"Authorization: Bearer USD{TOKEN}\" https://USD{PROM_HOST}/api/v1/query?query=up {\"status\":\"success\",[...]", "echo USDPROM_HOST echo USDTOKEN", "oc delete secret my-prometheus-reader-token secret \"my-prometheus-reader-token\" deleted", "oc project service-telemetry", "oc edit stf default", "apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: backends: metrics: prometheus: enabled: true storage: strategy: persistent retention: 7d", "oc get po -l app.kubernetes.io/name=prometheus -w", "oc describe po prometheus-default-0 | grep retention.time --storage.tsdb.retention.time=24h", "oc project service-telemetry", "oc apply -f - <<EOF apiVersion: monitoring.rhobs/v1 kind: PrometheusRule metadata: creationTimestamp: null labels: prometheus: default role: alert-rules name: prometheus-alarm-rules namespace: service-telemetry spec: groups: - name: ./openstack.rules rules: - alert: Collectd metrics receive rate is zero expr: rate(sg_total_collectd_msg_received_count[1m]) == 0 EOF", "curl -k -H \"Authorization: Bearer USD(oc create token stf-prometheus-reader)\" https://USD(oc get route default-prometheus-proxy -ogo-template='{{ .spec.host }}')/api/v1/rules 
{\"status\":\"success\",\"data\":{\"groups\":[{\"name\":\"./openstack.rules\",\"file\":\"/etc/prometheus/rules/prometheus-default-rulefiles-0/service-telemetry-prometheus-alarm-rules.yaml\",\"rules\":[{\"state\":\"inactive\",\"name\":\"Collectd metrics receive count is zero\",\"query\":\"rate(sg_total_collectd_msg_received_count[1m]) == 0\",\"duration\":0,\"labels\":{},\"annotations\":{},\"alerts\":[],\"health\":\"ok\",\"evaluationTime\":0.00034627,\"lastEvaluation\":\"2021-12-07T17:23:22.160448028Z\",\"type\":\"alerting\"}],\"interval\":30,\"evaluationTime\":0.000353787,\"lastEvaluation\":\"2021-12-07T17:23:22.160444017Z\"}]}}", "oc edit prometheusrules.monitoring.rhobs prometheus-alarm-rules", "alertmanager.yaml: |- global: resolve_timeout: 5m route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'null' receivers: - name: 'null'", "oc project service-telemetry", "oc edit stf default", "apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: backends: metrics: prometheus: enabled: true alertmanagerConfigManifest: | apiVersion: v1 kind: Secret metadata: name: 'alertmanager-default' namespace: 'service-telemetry' type: Opaque stringData: alertmanager.yaml: |- global: resolve_timeout: 10m route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'null' receivers: - name: 'null'", "oc get secret alertmanager-default -o go-template='{{index .data \"alertmanager.yaml\" | base64decode }}' global: resolve_timeout: 10m route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'null' receivers: - name: 'null'", "oc exec -it prometheus-default-0 -c prometheus -- sh -c \"wget --header \\\"Authorization: Bearer \\USD(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\\\" https://default-alertmanager-proxy:9095/api/v1/status -q -O -\" {\"status\":\"success\",\"data\":{\"configYAML\":\"...\",...}}", "alertmanager.yaml: |- global: resolve_timeout: 5m route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'null' receivers: - name: 'null'", "oc project service-telemetry", "cat > alertmanager.yaml <<EOF global: resolve_timeout: 10m slack_api_url: <slack_api_url> receivers: - name: slack slack_configs: - channel: #stf-alerts title: |- text: >- route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'slack' EOF", "CONFIG_MANIFEST=USD(oc create secret --dry-run=client generic alertmanager-default --from-file=alertmanager.yaml -o json) oc patch stf default --type=merge -p '{\"spec\":{\"alertmanagerConfigManifest\":'\"USDCONFIG_MANIFEST\"'}}'", "oc get secret alertmanager-default -o go-template='{{index .data \"alertmanager.yaml\" | base64decode }}' global: resolve_timeout: 10m slack_api_url: <slack_api_url> receivers: - name: slack slack_configs: - channel: #stf-alerts title: |- text: >- route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: 'slack'", "oc exec -it prometheus-default-0 -c prometheus -- /bin/sh -c \"wget --header \\\"Authorization: Bearer \\USD(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\\\" https://default-alertmanager-proxy:9095/api/v1/status -q -O -\" {\"status\":\"success\",\"data\":{\"configYAML\":\"...\",...}}", "apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: alerting: alertmanager: receivers: snmpTraps: alertOidLabel: oid community: public 
enabled: true port: 162 retries: 5 target: 192.168.25.254 timeout: 1 trapDefaultOid: 1.3.6.1.4.1.50495.15.1.2.1 trapDefaultSeverity: \"\" trapOidPrefix: 1.3.6.1.4.1.50495.15", "SNMPv2-MIB::snmpTrapOID.0 = OID: SNMPv2-SMI::enterprises.50495.15.1.2.1 SNMPv2-SMI::enterprises.50495.15.1.1.1 = STRING: \"TEST ALERT FROM PROMETHEUS PLEASE ACKNOWLEDGE\" SNMPv2-SMI::enterprises.50495.15.1.1.2 = STRING: \"firing\" SNMPv2-SMI::enterprises.50495.15.1.1.3 = STRING: \"warning\" SNMPv2-SMI::enterprises.50495.15.1.1.4 = \"\" SNMPv2-SMI::enterprises.50495.15.1.1.5 = \"\" SNMPv2-SMI::enterprises.50495.15.1.1.6 = STRING: \"TEST ALERT FROM \" SNMPv2-SMI::enterprises.50495.15.1.1.7 = STRING: \"{\\\"cluster\\\": \\\"TEST\\\", \\\"container\\\": \\\"sg-core\\\", \\\"endpoint\\\": \\\"prom-https\\\", \\\"prometheus\\\": \\\"service-telemetry/default\\\", \\\"service\\\": \\\"default-cloud1-coll-meter\\\", \\\"source\\\": \\\"SG\\\"}\" SNMPv2-SMI::enterprises.50495.15.1.1.8 = Timeticks: (1676476389) 194 days, 0:52:43.89 SNMPv2-SMI::enterprises.50495.15.1.1.9 = STRING: \"{\\\"status\\\": \\\"firing\\\", \\\"labels\\\": {\\\"cluster\\\": \\\"TEST\\\", \\\"container\\\": \\\"sg-core\\\", \\\"endpoint\\\": \\\"prom-https\\\", \\\"prometheus\\\": \\\"service-telemetry/default\\\", \\\"service\\\": \\\"default-cloud1-coll-meter\\\", \\\"source\\\": \\\"SG\\\"}, \\\"annotations\\\": {\\\"action\\\": \\\"TESTING PLEASE ACKNOWLEDGE, NO FURTHER ACTION REQUIRED ONLY A TEST\\\"}, \\\"startsAt\\\": \\\"2023-02-15T15:53:09.109Z\\\", \\\"endsAt\\\": \\\"0001-01-01T00:00:00Z\\\", \\\"generatorURL\\\": \\\"http://prometheus-default-0:9090/graph?g0.expr=sg_total_collectd_msg_received_count+%3E+1&g0.tab=1\\\", \\\"fingerprint\\\": \\\"feefeb77c577a02f\\\"}\"", "oc project service-telemetry", "oc edit stf default", "apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry spec: alerting: alertmanager: receivers: snmpTraps: enabled: true target: 10.10.10.10", "oc project service-telemetry", "oc apply -f - <<EOF apiVersion: monitoring.rhobs/v1 kind: PrometheusRule metadata: creationTimestamp: null labels: prometheus: default role: alert-rules name: prometheus-alarm-rules-snmp namespace: service-telemetry spec: groups: - name: ./openstack.rules rules: - alert: Collectd metrics receive rate is zero expr: rate(sg_total_collectd_msg_received_count[1m]) == 0 labels: oid: 1.3.6.1.4.1.50495.15.1.2.1 severity: critical EOF", "oc project service-telemetry", "oc edit stf default", "apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry spec: highAvailability: enabled: true", "oc apply -f - <<EOF apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: observabilityStrategy: none EOF", "for o in alertmanagers.monitoring.rhobs/default prometheuses.monitoring.rhobs/default elasticsearch/elasticsearch grafana/default-grafana; do oc delete USDo; done", "oc get pods NAME READY STATUS RESTARTS AGE default-cloud1-ceil-event-smartgateway-6f8547df6c-p2db5 3/3 Running 0 132m default-cloud1-ceil-meter-smartgateway-59c845d65b-gzhcs 3/3 Running 0 132m default-cloud1-coll-event-smartgateway-bf859f8d77-tzb66 3/3 Running 0 132m default-cloud1-coll-meter-smartgateway-75bbd948b9-d5phm 3/3 Running 0 132m default-cloud1-sens-meter-smartgateway-7fdbb57b6d-dh2g9 3/3 Running 0 132m default-interconnect-668d5bbcd6-57b2l 1/1 Running 0 132m interconnect-operator-b8f5bb647-tlp5t 1/1 Running 0 47h service-telemetry-operator-566b9dd695-wkvjq 1/1 Running 0 156m smart-gateway-operator-58d77dcf7-6xsq7 1/1 Running 0 47h", 
"CollectdEnableLibpodstats: false", "CollectdEnableSensubility: false" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/service_telemetry_framework_1.5/assembly-advanced-features_assembly
7.53. fcoe-target-utils
7.53. fcoe-target-utils 7.53.1. RHBA-2013:0457 - fcoe-target-utils bug fix and enhancement update Updated fcoe-target-utils packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The fcoe-target-utils packages provide a command-line interface for configuring Fibre Channel over Ethernet (FCoE) logical unit numbers (LUNs) and backstores. Bug Fixes BZ# 819698 Prior to this update, stopping the fcoe-target daemon did not stop the target session when rebooting. This update improves the fcoe-target script, and the fcoe-target daemon can now properly shut down the kernel target. BZ# 824227 Prior to this update, a delay in the FCoE interface initialization sometimes resulted in the target configuration not being loaded for that interface. This update permits target configuration for absent interfaces, allowing target and interface configuration in any order. BZ# 837730 Prior to this update, specifying a nonexistent backing file when creating a backstore resulted in the unhelpful Python error "ValueError: No such path". This update reports the error in a more helpful way. BZ# 837992 Prior to this update, attempting to remove a storage object in a backstore resulted in a Python error. This update fixes the problem, and storage objects can now be removed as expected. BZ# 838442 Prior to this update, attempting to redirect the output of targetcli resulted in a Python error. This update allows the output of targetcli to be redirected successfully. BZ# 846670 Due to a regression, creating a backstore resulted in a Python error. This update allows backstore creation without error. Enhancements BZ# 828096 Prior to this update, backstore size listings did not clearly distinguish between power-of-10 units (for example, gigabytes) and power-of-2 units (gibibytes). This update lists backstore sizes using power-of-2 units and labels them as such. BZ# 828681 The caching characteristics of backstores are now exposed to initiators via the SCSI Write Cache Enable (WCE) bit, instead of being set opaquely via the "buffered-mode" backstore setting. The default setting for WCE is "on". All users of fcoe-target-utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/fcoe-target-utils
Updating clusters
Updating clusters OpenShift Container Platform 4.9 Updating OpenShift Container Platform clusters Red Hat OpenShift Documentation Team
[ "oc adm upgrade channel <channel>", "oc get apirequestcounts", "NAME REMOVEDINRELEASE REQUESTSINCURRENTHOUR REQUESTSINLAST24H cloudcredentials.v1.operator.openshift.io 32 111 ingresses.v1.networking.k8s.io 28 110 ingresses.v1beta1.extensions 1.22 16 66 ingresses.v1beta1.networking.k8s.io 1.22 0 1 installplans.v1alpha1.operators.coreos.com 93 167", "oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!=\"\")]}{.status.removedInRelease}{\"\\t\"}{.metadata.name}{\"\\n\"}{end}'", "1.22 certificatesigningrequests.v1beta1.certificates.k8s.io 1.22 ingresses.v1beta1.extensions 1.22 ingresses.v1beta1.networking.k8s.io", "oc get apirequestcounts <resource>.<version>.<group> -o yaml", "oc get apirequestcounts ingresses.v1beta1.networking.k8s.io -o yaml", "oc get apirequestcounts ingresses.v1beta1.networking.k8s.io -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{\",\"}{.username}{\",\"}{.userAgent}{\"\\n\"}{end}' | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT", "VERBS USERNAME USERAGENT watch bob oc/v4.8.11 watch system:kube-controller-manager cluster-policy-controller/v0.0.0", "oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.8-kube-1.22-api-removals-in-4.9\":\"true\"}}' --type=merge", "oc get mcp", "NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False", "oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":true}}'", "oc adm upgrade channel eus-4.10", "oc adm upgrade --to-latest", "Updating to latest version 4.9.18", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.9.18 True False 6m29s Cluster version is 4.9.18", "oc adm upgrade --to-latest", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.10.1 True False 6m29s Cluster version is 4.10.1", "oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":false}}'", "oc get mcp", "NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False", "oc adm upgrade --clear", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", "spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1", "oc get machinehealthcheck -n openshift-machine-api", "oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5", "oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.8.13 True False 158m Cluster version is 4.8.13", "oc get clusterversion -o json|jq \".items[0].spec\"", "{ \"channel\": \"stable-4.9\", \"clusterID\": \"990f7ab8-109b-4c95-8480-2bd1deec55ff\" }", "oc adm upgrade", "Cluster version is 4.8.13 Updates: VERSION IMAGE 4.9.0 quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b", "oc adm upgrade 
--to-latest=true 1", "oc adm upgrade --to=<version> 1", "oc get clusterversion -o json|jq \".items[0].spec\"", "{ \"channel\": \"stable-4.9\", \"clusterID\": \"990f7ab8-109b-4c95-8480-2bd1deec55ff\", \"desiredUpdate\": { \"force\": false, \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b\", \"version\": \"4.9.0\" 1 } }", "oc get clusterversion -o json|jq \".items[0].status.history\"", "[ { \"completionTime\": null, \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:b8fa13e09d869089fc5957c32b02b7d3792a0b6f36693432acc0409615ab23b7\", \"startedTime\": \"2021-01-28T20:30:50Z\", \"state\": \"Partial\", \"verified\": true, \"version\": \"4.9.0\" }, { \"completionTime\": \"2021-01-28T20:30:50Z\", \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:b8fa13e09d869089fc5957c32b02b7d3792a0b6f36693432acc0409615ab23b7\", \"startedTime\": \"2021-01-28T17:38:10Z\", \"state\": \"Completed\", \"verified\": false, \"version\": \"4.8.13\" } ]", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.9.0 True False 2m Cluster version is 4.9.0", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.10.26 True True 24m Unable to apply 4.11.0-rc.7: an unknown error has occurred: MultipleErrors", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.22.1 ip-10-0-170-223.ec2.internal Ready master 82m v1.22.1 ip-10-0-179-95.ec2.internal Ready worker 70m v1.22.1 ip-10-0-182-134.ec2.internal Ready worker 70m v1.22.1 ip-10-0-211-16.ec2.internal Ready master 82m v1.22.1 ip-10-0-250-100.ec2.internal Ready worker 69m v1.22.1", "oc patch clusterversion/version --patch '{\"spec\":{\"upstream\":\"<update-server-url>\"}}' --type=merge", "clusterversion.config.openshift.io/version patched", "oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{\"\\n\"}{end}' nodes", "ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm", "oc label node <node name> node-role.kubernetes.io/<custom-label>=", "oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary=", "node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: 2 - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: \"\" 3", "oc create -f <file_name>", "machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m", "oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":true}}' --type=merge", "oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":true}}' --type=merge", "machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched", "oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":false}}' --type=merge", "oc patch mcp/workerpool-canary 
--patch '{\"spec\":{\"paused\":false}}' --type=merge", "machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched", "oc label node <node_name> node-role.kubernetes.io/<custom-label>-", "oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary-", "node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled", "USDoc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m", "oc delete mcp <mcp_name>", "--- Trivial example forcing an operator to acknowledge the start of an upgrade file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: \"Compute machine upgrade of {{ inventory_hostname }} is about to start\" - name: require the user agree to start an upgrade pause: prompt: \"Press Enter to start the compute machine update\"", "[all:vars] openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml", "systemctl disable --now firewalld.service", "subscription-manager repos --disable=rhel-7-server-ose-4.8-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=rhel-7-server-ose-4.9-rpms", "yum update openshift-ansible openshift-clients", "subscription-manager repos --disable=rhel-7-server-ose-4.8-rpms --enable=rhel-7-server-ose-4.9-rpms --enable=rhel-7-fast-datapath-rpms --enable=rhel-7-server-optional-rpms", "oc get node", "NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.22.1 mycluster-control-plane-1 Ready master 145m v1.22.1 mycluster-control-plane-2 Ready master 145m v1.22.1 mycluster-rhel7-0 NotReady,SchedulingDisabled worker 98m v1.22.1 mycluster-rhel7-1 Ready worker 98m v1.22.1 mycluster-rhel7-2 Ready worker 98m v1.22.1 mycluster-rhel7-3 Ready worker 98m v1.22.1", "[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel7-0.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1", "oc get node", "NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.22.1 mycluster-control-plane-1 Ready master 145m v1.22.1 mycluster-control-plane-2 Ready master 145m v1.22.1 mycluster-rhel7-0 NotReady,SchedulingDisabled worker 98m v1.22.1 mycluster-rhel7-1 Ready worker 98m v1.22.1 mycluster-rhel7-2 Ready worker 98m v1.22.1 mycluster-rhel7-3 Ready worker 98m v1.22.1", "yum update", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "cat ./pull-secret.text | jq . 
> <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "export OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<server_architecture>", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1", "oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature", "oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 1", "oc create -f <filename>.yaml", "oc create -f 
update-service-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group spec: targetNamespaces: - openshift-update-service", "oc -n openshift-update-service create -f <filename>.yaml", "oc -n openshift-update-service create -f update-service-operator-group.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription spec: channel: v1 installPlanApproval: \"Automatic\" source: \"redhat-operators\" 1 sourceNamespace: \"openshift-marketplace\" name: \"cincinnati-operator\"", "oc create -f <filename>.yaml", "oc -n openshift-update-service create -f update-service-subscription.yaml", "oc -n openshift-update-service get clusterserviceversions", "NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded", "FROM registry.access.redhat.com/ubi8/ubi:8.1 RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner CMD [\"/bin/bash\", \"-c\" ,\"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data\"]", "podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest", "podman push registry.example.com/openshift/graph-data:latest", "NAMESPACE=openshift-update-service", "NAME=service", "RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images", "GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest", "oc -n \"USD{NAMESPACE}\" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF", "while sleep 1; do POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"; SCHEME=\"USD{POLICY_ENGINE_GRAPH_URI%%:*}\"; if test \"USD{SCHEME}\" = http -o \"USD{SCHEME}\" = https; then break; fi; done", "while sleep 10; do HTTP_CODE=\"USD(curl --header Accept:application/json --output /dev/stderr --write-out \"%{http_code}\" \"USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.6\")\"; if test \"USD{HTTP_CODE}\" -eq 200; then break; fi; echo \"USD{HTTP_CODE}\"; done", "NAMESPACE=openshift-update-service", "NAME=service", "POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"", "PATCH=\"{\\\"spec\\\":{\\\"upstream\\\":\\\"USD{POLICY_ENGINE_GRAPH_URI}\\\"}}\"", "oc patch clusterversion version -p USDPATCH --type merge", "oc get machinehealthcheck -n openshift-machine-api", "oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5", "oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-", "oc adm upgrade --allow-explicit-upgrade --to-image USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}<sha256_sum_value> 1", "skopeo 
copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi8/ubi-minimal\" mirror-by-digest-only = true [[registry.mirror]] location = \"example.io/example/ubi-minimal\" [[registry.mirror]] location = \"example.com/example/ubi-minimal\" [[registry]] prefix = \"\" location = \"registry.example.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/image\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com/redhat\"", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6", "oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry", "oc apply -f imageContentSourcePolicy.yaml", "oc get ImageContentSourcePolicy -o yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.openshift.io/v1alpha1\",\"kind\":\"ImageContentSourcePolicy\",\"metadata\":{\"annotations\":{},\"name\":\"redhat-operator-index\"},\"spec\":{\"repositoryDigestMirrors\":[{\"mirrors\":[\"local.registry:5000\"],\"source\":\"registry.redhat.io\"}]}}", "oc get updateservice -n openshift-update-service", "NAME AGE service 6s", "oc delete updateservice service -n openshift-update-service", 
"updateservice.updateservice.operator.openshift.io \"service\" deleted", "oc project openshift-update-service", "Now using project \"openshift-update-service\" on server \"https://example.com:6443\".", "oc get operatorgroup", "NAME AGE openshift-update-service-fprx2 4m41s", "oc delete operatorgroup openshift-update-service-fprx2", "operatorgroup.operators.coreos.com \"openshift-update-service-fprx2\" deleted", "oc get subscription", "NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1", "oc get subscription update-service-operator -o yaml | grep \" currentCSV\"", "currentCSV: update-service-operator.v0.0.1", "oc delete subscription update-service-operator", "subscription.operators.coreos.com \"update-service-operator\" deleted", "oc delete clusterserviceversion update-service-operator.v0.0.1", "clusterserviceversion.operators.coreos.com \"update-service-operator.v0.0.1\" deleted", "oc get nodes -l node-role.kubernetes.io/master", "NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.22.1 control-plane-node-1 Ready master 75m v1.22.1 control-plane-node-2 Ready master 75m v1.22.1", "oc adm cordon <control_plane_node>", "oc wait --for=condition=Ready node/<control_plane_node>", "oc adm uncordon <control_plane_node>", "oc get nodes -l node-role.kubernetes.io/worker", "NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.22.1 compute-node-1 Ready worker 30m v1.22.1 compute-node-2 Ready worker 30m v1.22.1", "oc adm cordon <compute_node>", "oc adm drain <compute_node> [--pod-selector=<pod_selector>]", "oc wait --for=condition=Ready node/<compute_node>", "oc adm uncordon <compute_node>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/updating_clusters/index
Chapter 9. NUMA
Chapter 9. NUMA 9.1. Introduction Historically, all memory on x86 systems was equally accessible by all CPUs. Known as Uniform Memory Access (UMA), this design meant that access times were the same no matter which CPU performed the operation. This is no longer the case with recent x86 processors. In Non-Uniform Memory Access (NUMA), system memory is divided into zones (called nodes), which are allocated to particular CPUs or sockets. Access to memory that is local to a CPU is faster than access to memory that is connected to remote CPUs on that system. This chapter describes memory allocation and NUMA tuning configurations in virtualized environments.
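For example, on a NUMA host you can inspect the node topology with numactl --hardware and then restrict a guest's memory allocation to one node with a <numatune> element in its libvirt domain XML. The following is a minimal sketch; the node number and guest name are illustrative values:

    # numactl --hardware                   # lists the NUMA nodes, their CPUs, and per-node free memory
    # virsh edit example-guest             # example-guest is an illustrative domain name
    <numatune>
      <memory mode='strict' nodeset='0'/>  <!-- allocate guest memory only from node 0 -->
    </numatune>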
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/chap-Virtualization_Tuning_Optimization_Guide-NUMA
Chapter 6. Getting Started with Fuse on OpenShift
Chapter 6. Getting Started with Fuse on OpenShift Fuse on OpenShift (the name for Fuse Integration Services since 7.0) enables you to deploy Fuse applications on OpenShift Container Platform. Important For Fuse Integration projects (Fuse on OpenShift projects), Fuse Tooling requires installation of the Red Hat Container Development Kit (CDK) v3. x . See the Getting Started Guide for instructions. In addition to the prerequisites specified in this guide, you need to establish a Red Hat account if you do not have one. Your Red Hat user name and password are required to start the virtual OpenShift instance provided in the Red Hat Container Development Kit. You can easily get an account by registering on the Red Hat Customer Portal . Click in the upper right corner of the white banner, and then click on the Login to Your Red Hat Account page. Fuse Tooling enables you to develop and deploy Fuse Integration projects using the s2i binary workflow. In this workflow, the tooling builds your project locally, assembles it into an image stream, then pushes the image stream to OpenShift, where it is used to build the Docker container. Once the Docker container is built, OpenShift deploys it in a pod. Important Fuse Tooling works only with the S2I binary workflow and only with projects based on the Spring Boot framework. Note Although Fuse Tooling can deploy Fuse Integration projects created using the tooling to remote OpenShift servers, this chapter describes creating and deploying Fuse Integration projects to a virtual OpenShift instance, installed locally using the Red Hat Container Development Kit (CDK) v3. x . The following sections describe how to create and deploy your first Fuse Integration project: Section 6.1, "Adding the Red Hat Container Development Kit server" Section 6.2, "Starting the Container Development Environment (CDE) and virtual OpenShift server" Section 6.3, "Creating a new OpenShift project" Section 6.4, "Creating a new Fuse Integration project" Section 6.5, "Deploying the Fuse Integration project to OpenShift" Note You can also run a Fuse Integration project as a local Camel context, see Section 5.1, "Running routes as a local Camel context" , and then connect to it in the JMX Navigator view, where you can monitor and test the routing context. You can also run the Camel debugger on a Fuse Integration project ( Part II, "Debugging Routing Contexts" ) to expose and fix any logic errors in the routing context. 6.1. Adding the Red Hat Container Development Kit server To add the Red Hat Container Development Kit to the Servers view: If necessary, switch to the Fuse Integration perspective by selecting Window Perspective Open Perspective Fuse Integration . Note If a view that is described in this procedure is not open, you can open it by selecting Window Show View Other and then select the name of the view that you want to open. In the Servers view, click the link No servers are available. Click this link to create a new server... to open the Define a New Server wizard. This link appears only when the Servers view contains no server entry. Otherwise, right-click in the Servers view to open the context menu, and then select New Server to open the Define a New Server wizard. Select Red Hat JBoss Middleware Red Hat Container Development Kit 3.2+ . Accept the defaults for: Server's hostname - localhost Server name - Container Development Environment Click to open the Red Hat Container Development Environment page. 
to MiniShift Binary , click Browse , navigate to the location where you installed the Red Hat Container Development Kit 3. x and then click Open . to Username , click Add to open the Add a Credential page. Set the credentials this way: Username - Enter the name you use to log into your Red Hat account. Always prompt for password - Leave as is (disabled). Password - Enter the password you use to log into your Red Hat account. Click OK to return to the Red Hat Container Development Environment page, which is now populated. For example: Click Finish . Container Development Environment 3.2+ [Stopped, Synchronized] appears in the Servers view. Container Development Environment 3.2+ is the default server name when you add a CDK 3. x server. 6.2. Starting the Container Development Environment (CDE) and virtual OpenShift server Starting the Container Development Environment (CDE) also starts the virtual OpenShift server. Stopping the CDE also stops the virtual OpenShift server. In the Servers view, select Container Development Environment 3 [stopped, Synchronized] , and then click on the Servers menu bar. Console view opens and displays the status of the startup process: Note On initial startup, the CDE asks whether you accept the untrusted SSL certificate. Click Yes . When the startup process has finished, the Servers view displays: Switch to the OpenShift Explorer view. The virtual OpenShift server instance, developer , is also running: https://192.168.99.100:8443 is an example of a URL for the OpenShift developer web console. Your installation displays the URL for your instance. For more details, see Section 6.6, "Accessing the OpenShift web console" . 6.3. Creating a new OpenShift project When you deploy your Fuse Integration project to OpenShift, it is published to the OpenShift project you create here. In the OpenShift Explorer view, right-click the developer entry, to open the context menu. Select New Project to open the New OpenShift Project wizard. Set the new project's properties this way: In the Project Name field, enter the name for the project's namespace on the virtual OpenShift server. Only lower case letters, numbers, and dashes are valid. In the Display Name field, enter the name to display on the virtual OpenShift web console's Overview page. Leave the Description field as is. For example: Click Finish . The new OpenShift project (in this example, New FIS Test newtest ) appears in the OpenShift Explorer tab, under, in this example, developer https://192.168.99.100:8443 : Note MyProject myproject is an initial example project included with OpenShift. With New FIS Test newtest selected in the OpenShift Explorer view, the Properties view displays the project's details. For example: Note When you deploy the project to OpenShift, the Properties view gathers and displays the same information about the project that the OpenShift web console does. 6.4. Creating a new Fuse Integration project Before you create a new Fuse Integration project, you should enable staging repositories. This is needed because some Maven artifacts are not in default Maven repositories. To enable staging repositories, select Window Preferences Fuse Tooling Staging Repositories . 
To create a Fuse Integration project, use the Spring Boot on OpenShift template: In the Project Explorer view, right-click to open the context menu and then select New Fuse Integration Project to open the wizard's Choose a project name page: In the Project Name field, type a name that is unique to the workspace you are using, for example, myFISproject . Accept the defaults for the other options. Click to open the Select a Target Runtime page: Leave the defaults for Target Runtime ( No Runtime selected ) and Camel Version . Click to open the Advanced Project Setup page: Select the Simple log using Spring Boot - Spring DSL template. Click Finish . Note Because of the number of dependencies that are downloaded for a first-time Fuse Integration project, building it can take some time. If the Fuse Integration perspective is not already open, Developer Studio prompts you to indicate whether you want to open it now. Click Yes . When the build is done, the Fuse Integration perspective displays the project, for example: At this point, you can: Deploy the project on OpenShift Section 5.1, "Running routes as a local Camel context" to verify that the routing context runs successfully on your local machine Connecting to the running context in the JMX Navigator view (see the section called "Viewing processes in a local JMX server" ), you can monitor route components and test whether the route performs as expected: View a route component's JMX statistics - see Chapter 20, Viewing a component's JMX statistics . Edit the running route - see Chapter 24, Managing routing endpoints . Suspend/resume the running route - see Chapter 26, Managing routing contexts Start/stop tracing on the running route - see Chapter 22, Tracing Routes Run the Camel debugger on the project's camel-context.xml file to discover and fix logic errors - see Part II, "Debugging Routing Contexts" 6.5. Deploying the Fuse Integration project to OpenShift In the Project Explorer view, right-click the project's root (in this example, myFISproject ) to open the context menu. Select Run As Run Configurations to open the Run Configurations wizard. In the sidebar menu, select Maven Build Deploy <projectname> on OpenShift (in this example, Deploy myFISproject on OpenShift ) to open the project's default run configuration: Leave the default settings as they are on the Main tab. Open the JRE tab to access the VM arguments: In the VM arguments pane, change the value of the -Dkubernetes.namespace=test argument to match the project name that you used for the OpenShift project when you created it ( OpenShift project name in Section 6.3, "Creating a new OpenShift project" ). In this example, change the default value test to newtest : Depending on your OpenShift configuration, you may need to modify other VM arguments to support it: -Dkubernetes.master=https://192.168.99.1:8443 When running multiple OpenShift instances or using a remote instance, you need to specify the URL of the OpenShift instance targeted for the deployment. The URL above is an example. -Dkubernetes.trust.certificates=true When using the CDK, this argument is required. Leave it set to true . If you are using an OpenShift instance that has a valid SSL certificate, change the value of this argument to false . Click Apply and then click Run . Because of the number of dependencies to download, first-time deployment can take some time. The speed of your computer and your internet connection are contributing factors. Typically, it takes 25 to 35 minutes to complete a first-time deployment.
In the Console view, you can track the progress of the deploy process. In the following output, the entry *Pushing image 172.30.1 ... .. * indicates that the project built successfully and the application images are being pushed to OpenShift, where they will be used to build the Docker container. The Console view displays BUILD SUCCESS when deployment completes successfully: Switch to the OpenShift Explorer view and select New FIS Test newtest : In the Properties view, the Details page displays all of the project's property values. Open the other tabs ( Builds , Build Configs , Deployments ,... ) to view other properties of the project. The Properties view provides the same information as the OpenShift Web Console. In the OpenShift Explorer view, select camel-ose-springboot-xml to view its details in the Properties view: Scroll through the other tabs to view other properties of the deployment configuration. In the OpenShift Explorer view, select camel-ose-springboot-xml-1-mdmtd Pod Running , and then view the details of the running instance in the Properties view: In the OpenShift Explorer view, right-click camel-ose-springboot-xml-1-mdmtd Pod Running , and then select Pod Logs... . Note If prompted, enter the path to the installed oc executable. It is required to retrieve pod logs. The Console view automatically opens, displaying the logs from the running pod: Click in the Console view's menu bar to terminate the session and clear console output. 6.6. Accessing the OpenShift web console Note This information applies to Red Hat Container Development Kit installations only. To access the OpenShift web console, open a browser and enter the OpenShift server's URL, which is specific to your instance and your machine. For example, enter https://192.168.99.100:8443 , in the browser's address field. You can log into the web console either as a developer or as an administrator, using the default credentials: Default developer role Developer users can view only their own projects and the supplied OpenShift sample project, which demonstrates OpenShift v3 features. Developer users can create, edit and delete any project that they own that is deployed on OpenShift. Username - developer Password - developer Default administrator role An administrator user can view and access all projects on OpenShift (CDK). Administrator users can create, edit and delete, any project deployed on OpenShift. Username - admin Password - admin For more information on using the OpenShift web console, see Getting Started Guide .
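The Simple log using Spring Boot - Spring DSL template referenced in this chapter defines its route in a Spring XML file (camel-context.xml). As a rough illustration only, the following Java DSL sketch shows the kind of Spring Boot route such a project contains; the class name, package, endpoint URI, and log message are hypothetical and are not taken from the generated project.

// Hypothetical Java DSL illustration of a simple logging route in a
// Camel on Spring Boot project. Names and URIs are placeholders.
package org.example.fis;

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class SimpleLogRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Fire a timer every five seconds, set a message body, and log it,
        // which is the kind of behavior the template's XML route demonstrates.
        from("timer:simpleLog?period=5000")
            .setBody().constant("Hello from Fuse on OpenShift")
            .log("${body}");
    }
}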
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/RiderFisTools
Chapter 3. Creating and executing DMN and BPMN models using Maven
Chapter 3. Creating and executing DMN and BPMN models using Maven You can use Maven archetypes to develop DMN and BPMN models in VS Code using the Red Hat Decision Manager VS Code extension instead of Business Central. You can then integrate your archetypes with your Red Hat Decision Manager decision and process services in Business Central as needed. This method of developing DMN and BPMN models is helpful for building new business applications using the Red Hat Decision Manager VS Code extension. Procedure In a command terminal, navigate to a local folder where you want to store the new Red Hat Decision Manager project. Enter the following command to use a Maven archetype to generate a project within a defined folder: Generating a project using Maven archetype This command generates a Maven project with required dependencies and generates required directories and files to build your business application. You can use the Git version control system (recommended) when developing a project. If you want to generate multiple projects in the same directory, specify the artifactId and groupId of the generated business application by adding -DgroupId=<groupid> -DartifactId=<artifactId> to the command. In your VS Code IDE, click File , select Open Folder , and navigate to the folder that is generated using the command. Before creating the first asset, set a package for your business application, for example, org.kie.businessapp , and create respective directories in the following paths: PROJECT_HOME/src/main/java PROJECT_HOME/src/main/resources PROJECT_HOME/src/test/resources For example, you can create PROJECT_HOME/src/main/java/org/kie/businessapp for org.kie.businessapp package. Use VS Code to create assets for your business application. You can create the assets supported by Red Hat Decision Manager VS Code extension in the following ways: To create a business process, create a new file with .bpmn or .bpmn2 in PROJECT_HOME/src/main/resources/org/kie/businessapp directory, such as Process.bpmn . To create a DMN model, create a new file with .dmn in PROJECT_HOME/src/main/resources/org/kie/businessapp directory, such as AgeDecision.dmn . To create a test scenario simulation model, create a new file with .scesim in PROJECT_HOME/src/test/resources/org/kie/businessapp directory, such as TestAgeScenario.scesim . After you create the assets in your Maven archetype, navigate to the root directory (contains pom.xml ) of the project in the command line and run the following command to build the knowledge JAR (KJAR) of your project: If the build fails, address any problems described in the command line error messages and try again to validate the project until the build is successful. When the build is successful, you can find the artifact of your business application in PROJECT_HOME/target directory. Note Use mvn clean install command often to validate your project after each major change during development. You can deploy the generated knowledge JAR (KJAR) of your business application on a running KIE Server using the REST API. For more information about using REST API, see Interacting with Red Hat Decision Manager using KIE APIs .
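As a rough sketch of that final deployment step, the following Java program uses the JDK HTTP client to ask a running KIE Server to create a container from the built KJAR. The server URL, credentials, container ID, and GAV coordinates are placeholder assumptions for illustration; the request shape follows the KIE Server REST API for creating containers (PUT /services/rest/server/containers/{containerId}), and your environment may differ.

// Minimal sketch of deploying a built KJAR to a running KIE Server over REST.
// All values below (URL, credentials, container id, GAV) are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DeployKjar {
    public static void main(String[] args) throws Exception {
        String server = "http://localhost:8080/kie-server/services/rest/server";
        String containerId = "businessapp_1.0.0";
        String body = "{ \"container-id\": \"" + containerId + "\","
                + " \"release-id\": { \"group-id\": \"org.kie.businessapp\","
                + " \"artifact-id\": \"businessapp\", \"version\": \"1.0.0\" } }";
        String auth = Base64.getEncoder()
                .encodeToString("kieserver:kieserver1!".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(server + "/containers/" + containerId))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}

A 2xx response typically indicates that the container was created and the KJAR is available for the KIE Server REST execution endpoints.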
[ "mvn archetype:generate -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-kjar-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024", "mvn clean install" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/proc-dmn-bpmn-maven-create_dmn-models
Managing performance and resource use
Managing performance and resource use Red Hat OpenShift Pipelines 1.18 Managing resource consumption in OpenShift Pipelines Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/managing_performance_and_resource_use/index
2.11.4. Finding Control Groups
2.11.4. Finding Control Groups To list the cgroups on a system, run: You can restrict the output to a specific hierarchy by specifying a controller and path in the format controller : path . For example: lists only subgroups of the group1 cgroup in the hierarchy to which the cpuset subsystem is attached.
[ "~]USD lscgroup", "~]USD lscgroup cpuset:group1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/finding_control_groups
1.3. LVM Architecture Overview
1.3. LVM Architecture Overview Note LVM2 is backwards compatible with LVM1, with the exception of snapshot and cluster support. You can convert a volume group from LVM1 format to LVM2 format with the vgconvert command. For information on converting LVM metadata format, see the vgconvert (8) man page. The underlying physical storage unit of an LVM logical volume is a block device such as a partition or whole disk. This device is initialized as an LVM physical volume (PV). To create an LVM logical volume, the physical volumes are combined into a volume group (VG). This creates a pool of disk space out of which LVM logical volumes (LVs) can be allocated. This process is analogous to the way in which disks are divided into partitions. A logical volume is used by file systems and applications (such as databases). Figure 1.1, "LVM Logical Volume Components" shows the components of a simple LVM logical volume: Figure 1.1. LVM Logical Volume Components For detailed information on the components of an LVM logical volume, see Chapter 2, LVM Components .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lvm_definition
Chapter 9. Writing a Camel application that uses transactions
Chapter 9. Writing a Camel application that uses transactions After you configure the three types of services that your application can reference, you are ready to write an application. The three types of services are: One transaction manager that is an implementation of one of the following interfaces: javax.transaction.UserTransaction javax.transaction.TransactionManager org.springframework.transaction.PlatformTransactionManager At least one JDBC data source that implements the javax.sql.DataSource interface. Often, there is more than one data source. At least one JMS connection factory that implements the javax.jms.ConnectionFactory interface. Often, there is more than one. This section describes a Camel-specific configuration related to management of transactions, data sources, and connection factories. Note This section describes several Spring-related concepts such as SpringTransactionPolicy . There is a clear distinction between Spring XML DSL and Blueprint XML DSL , which are both XML languages that define Camel contexts. Spring XML DSL is now deprecated in Fuse. However, the Camel transaction mechanism still uses the Spring library internally. Most of the information here is not dependent on the kind of PlatformTransactionManager that is used. If the PlatformTransactionManager is the Narayana transaction manager, then full JTA transactions are used. If PlatformTransactionManager is defined as a local Blueprint <bean> , for example, org.springframework.jms.connection.JmsTransactionManager , then local transactions are used. Transaction demarcation refers to the procedures for starting, committing, and rolling back transactions. This section describes the mechanisms that are available for controlling transaction demarcation, both by programming and by configuration. Section 9.1, "Transaction demarcation by marking the route" Section 9.2, "Demarcation by transactional endpoints" Section 9.3, "Demarcation by declarative transactions" Section 9.4, "Transaction propagation policies" Section 9.5, "Error handling and rollbacks" 9.1. Transaction demarcation by marking the route Apache Camel provides a simple mechanism for initiating a transaction in a route. Insert the transacted() command in the Java DSL or insert the <transacted/> tag in the XML DSL. Figure 9.1. Demarcation by Marking the Route The transacted processor demarcates transactions as follows: When an exchange enters the transacted processor, the transacted processor invokes the default transaction manager to begin a transaction and attaches the transaction to the current thread. When the exchange reaches the end of the remaining route, the transacted processor invokes the transaction manager to commit the current transaction. 9.1.1. Sample route with JDBC resource Figure 9.1, "Demarcation by Marking the Route" shows an example of a route that is made transactional by adding the transacted() DSL command to the route. All of the route nodes that follow the transacted() node are included in the transaction scope. In this example, the two following nodes access a JDBC resource. 9.1.2. 
Route definition in Java DSL The following Java DSL example shows how to define a transactional route by marking the route with the transacted() DSL command: import org.apache.camel.builder.RouteBuilder; class MyRouteBuilder extends RouteBuilder { public void configure() { from("file:src/data?noop=true") .transacted() .bean("accountService","credit") .bean("accountService","debit"); } } In this example, the file endpoint reads some XML format files that describe a transfer of funds from one account to another. The first bean() invocation credits the specified sum of money to the beneficiary's account and then the second bean() invocation subtracts the specified sum of money from the sender's account. Both of the bean() invocations cause updates to be made to a database resource. It is assumed that the database resource is bound to the transaction through the transaction manager, for example, see Chapter 6, Using JDBC data sources . 9.1.3. Route definition in Blueprint XML The preceding route can also be expressed in Blueprint XML. The <transacted /> tag marks the route as transactional, as shown in the following XML: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ...> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route> <from uri="file:src/data?noop=true" /> <transacted /> <bean ref="accountService" method="credit" /> <bean ref="accountService" method="debit" /> </route> </camelContext> </blueprint> 9.1.4. Default transaction manager and transacted policy To demarcate transactions, the transacted processor must be associated with a particular transaction manager instance. To save you having to specify the transaction manager every time you invoke transacted() , the transacted processor automatically picks a sensible default. For example, if there is only one instance of a transaction manager in your configuration, the transacted processor implicitly picks this transaction manager and uses it to demarcate transactions. A transacted processor can also be configured with a transacted policy, of TransactedPolicy type, which encapsulates a propagation policy and a transaction manager (see Section 9.4, "Transaction propagation policies" for details). The following rules are used to pick the default transaction manager or transaction policy: If there is only one bean of org.apache.camel.spi.TransactedPolicy type, use this bean. Note The TransactedPolicy type is a base type of the SpringTransactionPolicy type that is described in Section 9.4, "Transaction propagation policies" . Hence, the bean referred to here could be a SpringTransactionPolicy bean. If there is a bean of type, org.apache.camel.spi.TransactedPolicy , which has the ID , PROPAGATION_REQUIRED , use this bean. If there is only one bean of org.springframework.transaction.PlatformTransactionManager type, use this bean. You also have the option of specifying a bean explicitly by providing the bean ID as an argument to transacted() . See Section 9.4.4, "Sample route with PROPAGATION_NEVER policy in Java DSL" . 9.1.5. Transaction scope If you insert a transacted processor into a route, the transaction manager creates a new transaction each time an exchange passes through this node. The transaction's scope is defined as follows: The transaction is associated with only the current thread. The transaction scope encompasses all of the route nodes that follow the transacted processor. 
Any route nodes that precede the transacted processor are not in the transaction. However, if the route begins with a transactional endpoint then all nodes in the route are in the transaction. See Section 9.2.5, "Transactional endpoints at start of route" . Consider the following route. It is incorrect because the transacted() DSL command mistakenly appears after the first bean() call, which accesses the database resource: // Java import org.apache.camel.builder.RouteBuilder; public class MyRouteBuilder extends RouteBuilder { ... public void configure() { from("file:src/data?noop=true") .bean("accountService", "credit") .transacted() // <-- WARNING: Transaction started in the wrong place! .bean("accountService", "debit"); } } 9.1.6. No thread pools in a transactional route It is crucial to understand that a given transaction is associated with only the current thread. You must not create a thread pool in the middle of a transactional route because the processing in the new threads will not participate in the current transaction. For example, the following route is bound to cause problems: // Java import org.apache.camel.builder.RouteBuilder; public class MyRouteBuilder extends RouteBuilder { ... public void configure() { from("file:src/data?noop=true") .transacted() .threads(3) // WARNING: Subthreads are not in transaction scope! .bean("accountService", "credit") .bean("accountService", "debit"); } } A route such as the preceding one is certain to corrupt your database because the threads() DSL command is incompatible with transacted routes. Even if the threads() call precedes the transacted() call, the route will not behave as expected. 9.1.7. Breaking a route into fragments If you want to break a route into fragments and have each route fragment participate in the current transaction, you can use direct: endpoints. For example, to send exchanges to separate route fragments, depending on whether the transfer amount is big (greater than 100) or small (less than or equal to 100), you can use the choice() DSL command and direct endpoints, as follows: // Java import org.apache.camel.builder.RouteBuilder; public class MyRouteBuilder extends RouteBuilder { ... public void configure() { from("file:src/data?noop=true") .transacted() .bean("accountService", "credit") .choice().when(xpath("/transaction/transfer[amount > 100]")) .to("direct:txbig") .otherwise() .to("direct:txsmall"); from("direct:txbig") .bean("accountService", "debit") .bean("accountService", "dumpTable") .to("file:target/messages/big"); from("direct:txsmall") .bean("accountService", "debit") .bean("accountService", "dumpTable") .to("file:target/messages/small"); } } Both the fragment beginning with direct:txbig and the fragment beginning with direct:txsmall participate in the current transaction because the direct endpoints are synchronous. This means that the fragments execute in the same thread as the first route fragment and, therefore, they are included in the same transaction scope. Note You must not use seda endpoints to join the route fragments. seda consumer endpoints create a new thread (or threads) to execute the route fragment (asynchronous processing). Hence, the fragments would not participate in the original transaction. 9.1.8. Resource endpoints The following Apache Camel components act as resource endpoints when they appear as the destination of a route, for example, if they appear in the to() DSL command. That is, these endpoints can access a transactional resource, such as a database or a persistent queue. 
The resource endpoints can participate in the current transaction, as long as they are associated with the same transaction manager as the transacted processor that initiated the current transaction. ActiveMQ AMQP Hibernate iBatis JavaSpace JBI JCR JDBC JMS JPA LDAP 9.1.9. Sample route with resource endpoints The following example shows a route with resource endpoints. This sends the order for a money transfer to two different JMS queues. The credits queue processes the order to credit the receiver's account. The debits queue processes the order to debit the sender's account. There should be a credit only if there is a corresponding debit. Consequently, you want to enclose the enqueueing operations in a single transaction. If the transaction succeeds, both the credit order and the debit order will be enqueued. If an error occurs, neither order will be enqueued. from("file:src/data?noop=true") .transacted() .to("jmstx:queue:credits") .to("jmstx:queue:debits"); 9.2. Demarcation by transactional endpoints If a consumer endpoint at the start of a route accesses a resource, the transacted() command is of no use, because it initiates the transaction after an exchange is polled. In other words, the transaction starts too late to include the consumer endpoint within the transaction scope. In this case, the correct approach is to make the endpoint itself responsible for initiating the transaction. An endpoint that is capable of managing transactions is known as a transactional endpoint . There are two different models of demarcation by transactional endpoint, as follows: General case - normally, a transactional endpoint demarcates transactions as follows: When an exchange arrives at the endpoint, or when the endpoint successfully polls for an exchange, the endpoint invokes its associated transaction manager to begin a transaction. The endpoint attaches the new transaction to the current thread. When the exchange reaches the end of the route, the transactional endpoint invokes the transaction manager to commit the current transaction. JMS endpoint with an InOut exchange - when a JMS consumer endpoint receives an InOut exchange and this exchange is routed to another JMS endpoint, this must be treated as a special case. The problem is that the route can deadlock, if you try to enclose the entire request/reply exchange in a single transaction. 9.2.1. Sample route with a JMS endpoint Section 9.2, "Demarcation by transactional endpoints" shows an example of a route that is made transactional by the presence of a transactional endpoint at the start of the route (in the from() command). All of the route nodes are included in the transaction scope. In this example, all of the endpoints in the route access a JMS resource. 9.2.2. Route definition in Java DSL The following Java DSL example shows how to define a transactional route by starting the route with a transactional endpoint: from("jmstx:queue:giro") .to("jmstx:queue:credits") .to("jmstx:queue:debits"); In the example, the transaction scope encompasses the endpoints, jmstx:queue:giro , jmstx:queue:credits , and jmstx:queue:debits . If the transaction succeeds, the exchange is permanently removed from the giro queue and pushed on to the credits queue and the debits queue. If the transaction fails, the exchange does not get put on to the credits and debits queues and the exchange is pushed back on to the giro queue. By default, JMS automatically attempts to redeliver the message. 
The JMS component bean, jmstx , must be explicitly configured to use transactions, as follows: <blueprint ...> <bean id="jmstx" class="org.apache.camel.component.jms.JmsComponent"> <property name="configuration" ref="jmsConfig" /> </bean> <bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration"> <property name="connectionFactory" ref="jmsConnectionFactory" /> <property name="transactionManager" ref="jmsTransactionManager" /> <property name="transacted" value="true" /> </bean> ... </blueprint> In the example, the transaction manager instance, jmsTransactionManager , is associated with the JMS component and the transacted property is set to true to enable transaction demarcation for InOnly exchanges. 9.2.3. Route definition in Blueprint XML The preceding route can equivalently be expressed in Blueprint XML, as follows: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route> <from uri="jmstx:queue:giro" /> <to uri="jmstx:queue:credits" /> <to uri="jmstx:queue:debits" /> </route> </camelContext> </blueprint> 9.2.4. DSL transacted() command not required The transacted() DSL command is not required in a route that starts with a transactional endpoint. Nevertheless, assuming that the default transaction policy is PROPAGATION_REQUIRED (see Section 9.4, "Transaction propagation policies" ), it is usually harmless to include the transacted() command, as in this example: from("jmstx:queue:giro") .transacted() .to("jmstx:queue:credits") .to("jmstx:queue:debits"); However, it is possible for this route to behave in unexpected ways, for example, if a single TransactedPolicy bean having a non-default propagation policy is created in Blueprint XML. See Section 9.1.4, "Default transaction manager and transacted policy" . Consequently, it is usually better not to include the transacted() DSL command in routes that start with a transactional endpoint. 9.2.5. Transactional endpoints at start of route The following Apache Camel components act as transactional endpoints when they appear at the start of a route (for example, if they appear in the from() DSL command). That is, these endpoints can be configured to behave as a transactional client and they can also access a transactional resource. ActiveMQ AMQP JavaSpace JMS JPA 9.3. Demarcation by declarative transactions When using Blueprint XML, you can also demarcate transactions by declaring transaction policies in your Blueprint XML file. By applying the appropriate transaction policy to a bean or bean method, for example, the Required policy, you can ensure that a transaction is started whenever that particular bean or bean method is invoked. At the end of the bean method, the transaction is committed. This approach is analogous to the way that transactions are dealt with in Enterprise Java Beans. OSGi declarative transactions enable you to define transaction policies at the following scopes in your Blueprint file: Section 9.3.1, "Bean-level declaration" Section 9.3.2, "Top-level declaration" See also: Section 9.3.3, "Description of tx:transaction attributes" . 9.3.1. 
Bean-level declaration To declare transaction policies at the bean level, insert a tx:transaction element as a child of the bean element, as follows: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:tx="http://aries.apache.org/xmlns/transactions/v1.1.0"> <bean id="accountFoo" class="org.jboss.fuse.example.Account"> <tx:transaction method="*" value="Required" /> <property name="accountName" value="Foo" /> </bean> <bean id="accountBar" class="org.jboss.fuse.example.Account"> <tx:transaction method="*" value="Required" /> <property name="accountName" value="Bar" /> </bean> </blueprint> In the preceding example, the required transaction policy is applied to all methods of the accountFoo bean and the accountBar bean, where the method attribute specifies the wildcard, * , to match all bean methods. 9.3.2. Top-level declaration To declare transaction policies at the top level, insert a tx:transaction element as a child of the blueprint element, as follows: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:tx="http://aries.apache.org/xmlns/transactions/v1.1.0"> <tx:transaction bean="account*" value="Required" /> <bean id="accountFoo" class="org.jboss.fuse.example.Account"> <property name="accountName" value="Foo" /> </bean> <bean id="accountBar" class="org.jboss.fuse.example.Account"> <property name="accountName" value="Bar" /> </bean> </blueprint> In the preceding example, the Required transaction policy is applied to all methods of every bean whose ID matches the pattern, account* . 9.3.3. Description of tx:transaction attributes The tx:transaction element supports the following attributes: bean (Top-level only) Specifies a list of bean IDs (comma or space separated) to which the transaction policy applies. For example: <blueprint ...> <tx:transaction bean="accountFoo,accountBar" value="..." /> </blueprint> You can also use the wildcard character, * , which may appear at most once in each list entry. For example: <blueprint ...> <tx:transaction bean="account*,jms*" value="..." /> </blueprint> If the bean attribute is omitted, it defaults to * (matching all non-synthetic beans in the blueprint file). method (Top-level and bean-level) Specifies a list of method names (comma or space separated) to which the transaction policy applies. For example: <bean id="accountFoo" class="org.jboss.fuse.example.Account"> <tx:transaction method="debit,credit,transfer" value="Required" /> <property name="accountName" value="Foo" /> </bean> You can also use the wildcard character, * , which may appear at most once in each list entry. If the method attribute is omitted, it defaults to * (matching all methods in the applicable beans). value (Top-level and bean-level) Specifies the transaction policy. The policy values have the same semantics as the policies defined in the EJB 3.0 specification, as follows: Required - support a current transaction; create a new one if none exists. Mandatory - support a current transaction; throw an exception if no current transaction exists. RequiresNew - create a new transaction, suspending the current transaction if one exists. Supports - support a current transaction; execute non-transactionally if none exists. NotSupported - do not support a current transaction; rather always execute non-transactionally. Never - do not support a current transaction; throw an exception if a current transaction exists. 9.4. 
Transaction propagation policies If you want to influence the way a transactional client creates new transactions, you can use JmsTransactionManager and specify a transaction policy for it. In particular, Spring transaction policies enable you to specify a propagation behavior for your transaction. For example, if a transactional client is about to create a new transaction and it detects that a transaction is already associated with the current thread, should it go ahead and create a new transaction, suspending the old one? Or should it let the existing transaction take over? These kinds of behavior are regulated by specifying the propagation behavior on a transaction policy. Transaction policies are instantiated as beans in Blueprint XML. You can then reference a transaction policy by providing its bean ID as an argument to the transacted() DSL command. For example, if you want to initiate transactions subject to the behavior, PROPAGATION_REQUIRES_NEW , you could use the following route: from("file:src/data?noop=true") .transacted("PROPAGATION_REQUIRES_NEW") .bean("accountService","credit") .bean("accountService","debit") .to("file:target/messages"); Where the PROPAGATION_REQUIRES_NEW argument specifies the bean ID of a transaction policy bean that is configured with the PROPAGATION_REQUIRES_NEW behavior. See Section 9.4.3, "Defining policy beans in Blueprint XML" . 9.4.1. About Spring transaction policies Apache Camel lets you define Spring transaction policies using the org.apache.camel.spring.spi.SpringTransactionPolicy class, which is essentially a wrapper around a native Spring class. The SpringTransactionPolicy class encapsulates two pieces of data: A reference to a transaction manager of PlatformTransactionManager type A propagation behavior For example, you could instantiate a Spring transaction policy bean with PROPAGATION_MANDATORY behavior, as follows: <blueprint ...> <bean id="PROPAGATION_MANDATORY "class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager" /> <property name="propagationBehaviorName" value="PROPAGATION_MANDATORY" /> </bean> ... </blueprint> 9.4.2. Descriptions of propagation behaviors The following propagation behaviors are supported by Spring. These values were originally modeled on the propagation behaviors supported by Java EE: PROPAGATION_MANDATORY Support a current transaction. Throw an exception if no current transaction exists. PROPAGATION_NESTED Execute within a nested transaction if a current transaction exists, else behave like PROPAGATION_REQUIRED . Note Nested transactions are not supported by all transaction managers. PROPAGATION_NEVER Do not support a current transaction. Throw an exception if a current transaction exists. PROPAGATION_NOT_SUPPORTED Do not support a current transaction. Always execute non-transactionally. Note This policy requires the current transaction to be suspended, a feature which is not supported by all transaction managers. PROPAGATION_REQUIRED (Default) Support a current transaction. Create a new one if none exists. PROPAGATION_REQUIRES_NEW Create a new transaction, suspending the current transaction if one exists. Note Suspending transactions is not supported by all transaction managers. PROPAGATION_SUPPORTS Support a current transaction. Execute non-transactionally if none exists. 9.4.3. Defining policy beans in Blueprint XML The following example shows how to define transaction policy beans for all of the supported propagation behaviors. 
For convenience, each of the bean IDs matches the specified value of the propagation behavior value, but in practice you can use whatever value you like for the bean IDs. <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <bean id="PROPAGATION_MANDATORY " class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager" /> <property name="propagationBehaviorName" value="PROPAGATION_MANDATORY" /> </bean> <bean id="PROPAGATION_NESTED" class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager" /> <property name="propagationBehaviorName" value="PROPAGATION_NESTED" /> </bean> <bean id="PROPAGATION_NEVER" class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager" /> <property name="propagationBehaviorName" value="PROPAGATION_NEVER" /> </bean> <bean id="PROPAGATION_NOT_SUPPORTED" class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager" /> <property name="propagationBehaviorName" value="PROPAGATION_NOT_SUPPORTED" /> </bean> <!-- This is the default behavior. --> <bean id="PROPAGATION_REQUIRED" class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager" /> </bean> <bean id="PROPAGATION_REQUIRES_NEW" class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager" /> <property name="propagationBehaviorName" value="PROPAGATION_REQUIRES_NEW" /> </bean> <bean id="PROPAGATION_SUPPORTS" class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager" /> <property name="propagationBehaviorName" value="PROPAGATION_SUPPORTS" /> </bean> </blueprint> Note If you want to paste any of these bean definitions into your own Blueprint XML configuration, remember to customize the references to the transaction manager. That is, replace references to txManager with the actual ID of your transaction manager bean. 9.4.4. Sample route with PROPAGATION_NEVER policy in Java DSL A simple way of demonstrating that transaction policies have some effect on a transaction is to insert a PROPAGATION_NEVER policy into the middle of an existing transaction, as shown in the following route: from("file:src/data?noop=true") .transacted() .bean("accountService","credit") .transacted("PROPAGATION_NEVER") .bean("accountService","debit"); Used in this way, the PROPAGATION_NEVER policy inevitably aborts every transaction, leading to a transaction rollback. You should easily be able to see the effect of this on your application. Note Remember that the string value passed to transacted() is a bean ID , not a propagation behavior name. In this example, the bean ID is chosen to be the same as a propagation behavior name, but this need not always be the case. For example, if your application uses more than one transaction manager, you might end up with more than one policy bean having a particular propagation behavior. In this case, you could not simply name the beans after the propagation behavior. 9.4.5. 
Sample route with PROPAGATION_NEVER policy in Blueprint XML The preceding route can be defined in Blueprint XML, as follows: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route> <from uri="file:src/data?noop=true" /> <transacted /> <bean ref="accountService" method="credit" /> <transacted ref="PROPAGATION_NEVER" /> <bean ref="accountService" method="debit" /> </route> </camelContext> </blueprint> 9.5. Error handling and rollbacks While you can use standard Apache Camel error handling techniques in a transactional route, it is important to understand the interaction between exceptions and transaction demarcation. In particular, you need to consider that thrown exceptions usually cause transaction rollback. See the following topics: Section 9.5.1, "How to roll back a transaction" Section 9.5.2, "How to define a dead letter queue" Section 9.5.3, "Catching exceptions around a transaction" 9.5.1. How to roll back a transaction You can use one of the following approaches to roll back a transaction: Section 9.5.1.1, "Using runtime exceptions to trigger rollbacks" Section 9.5.1.2, "Using the rollback() DSL command" Section 9.5.1.3, "Using the markRollbackOnly() DSL command" 9.5.1.1. Using runtime exceptions to trigger rollbacks The most common way to roll back a Spring transaction is to throw a runtime (unchecked) exception. In other words, the exception is an instance or subclass of java.lang.RuntimeException . Java errors, of java.lang.Error type, also trigger transaction rollback. Checked exceptions, on the other hand, do not trigger rollback. The following figure summarizes how Java errors and exceptions affect transactions, where the classes that trigger roll back are shaded gray. Note The Spring framework also provides a system of XML annotations that enable you to specify which exceptions should or should not trigger roll backs. For details, see "Rolling back" in the Spring Reference Guide. Warning If a runtime exception is handled within the transaction, that is, before the exception has the chance to percolate up to the code that does the transaction demarcation, the transaction will not be rolled back. For details, see Section 9.5.2, "How to define a dead letter queue" . 9.5.1.2. Using the rollback() DSL command If you want to trigger a rollback in the middle of a transacted route, you can do this by calling the rollback() DSL command, which throws an org.apache.camel.RollbackExchangeException exception. In other words, the rollback() command uses the standard approach of throwing a runtime exception to trigger the rollback. For example, suppose that you decide that there should be an absolute limit on the size of money transfers in the account services application. You could trigger a rollback when the amount exceeds 100 by using the code in the following example: from("file:src/data?noop=true") .transacted() .bean("accountService","credit") .choice().when(xpath("/transaction/transfer[amount > 100]")) .rollback() .otherwise() .to("direct:txsmall"); from("direct:txsmall") .bean("accountService","debit") .bean("accountService","dumpTable") .to("file:target/messages/small"); Note If you trigger a rollback in the preceding route, it will get trapped in an infinite loop. The reason for this is that the RollbackExchangeException exception thrown by rollback() propagates back to the file endpoint at the start of the route. 
The File component has a built-in reliability feature that causes it to resend any exchange for which an exception has been thrown. Upon resending, of course, the exchange just triggers another rollback, leading to an infinite loop. The following example shows how to avoid this infinite loop. 9.5.1.3. Using the markRollbackOnly() DSL command The markRollbackOnly() DSL command enables you to force the current transaction to roll back, without throwing an exception. This can be useful when throwing an exception has unwanted side effects, such as the example in Section 9.5.1.2, "Using the rollback() DSL command" . The following example shows how to modify the example in Section 9.5.1.2, "Using the rollback() DSL command" by replacing the rollback() command with the markRollbackOnly() command. This version of the route solves the problem of the infinite loop. In this case, when the amount of the money transfer exceeds 100, the current transaction is rolled back, but no exception is thrown. Because the file endpoint does not receive an exception, it does not retry the exchange, and the failed transaction is quietly discarded. The following code rolls back the transaction with the markRollbackOnly() command: from("file:src/data?noop=true") .transacted() .bean("accountService","credit") .choice().when(xpath("/transaction/transfer[amount > 100]")) .markRollbackOnly() .otherwise() .to("direct:txsmall"); ... The preceding route implementation is not ideal, however. Although the route cleanly rolls back the transaction (leaving the database in a consistent state) and avoids the pitfall of infinite looping, it does not keep any record of the failed transaction. In a real-world application, you would typically want to keep track of any failed transaction. For example, you might want to write a letter to the relevant customer in order to explain why the transaction did not succeed. A convenient way of tracking failed transactions is to add a dead-letter queue to the route. 9.5.2. How to define a dead letter queue To keep track of failed transactions, you can define an onException() clause, which enables you to divert the relevant exchange object to a dead-letter queue. When used in the context of transactions, however, you need to be careful about how you define the onException() clause, because of potential interactions between exception handling and transaction handling. The following example shows the correct way to define an onException() clause, assuming that you need to suppress the rethrown exception. // Java import org.apache.camel.builder.RouteBuilder; public class MyRouteBuilder extends RouteBuilder { ... public void configure() { onException(IllegalArgumentException.class) .maximumRedeliveries(1) .handled(true) .to("file:target/messages?fileName=deadLetters.xml&fileExist=Append") .markRollbackOnly(); // NB: Must come *after* the dead letter endpoint. from("file:src/data?noop=true") .transacted() .bean("accountService","credit") .bean("accountService","debit") .bean("accountService","dumpTable") .to("file:target/messages"); } } In the preceding example, onException() is configured to catch the IllegalArgumentException exception and send the offending exchange to a dead letter file, deadLetters.xml . Of course, you can change this definition to catch whatever kind of exception arises in your application. The exception rethrow behavior and the transaction rollback behavior are controlled by the following special settings in the onException() clause: handled(true) - suppress the rethrown exception. 
In this particular example, the rethrown exception is undesirable because it triggers an infinite loop when it propagates back to the file endpoint. See Section 9.5.1.3, "Using the markRollbackOnly() DSL command" . In some cases, however, it might be acceptable to rethrow the exception (for example, if the endpoint at the start of the route does not implement a retry feature). markRollbackOnly() - marks the current transaction for rollback without throwing an exception. Note that it is essential to insert this DSL command after the to() command that routes the exchange to the dead letter queue. Otherwise, the exchange would never reach the dead letter queue, because markRollbackOnly() interrupts the chain of processing. 9.5.3. Catching exceptions around a transaction Instead of using onException() , a simple approach to handling exceptions in a transactional route is to use the doTry() and doCatch() clauses around the route. For example, the following code shows how you can catch and handle the IllegalArgumentException in a transactional route, without the risk of getting trapped in an infinite loop. // Java import org.apache.camel.builder.RouteBuilder; public class MyRouteBuilder extends RouteBuilder { ... public void configure() { from("file:src/data?noop=true") .doTry() .to("direct:split") .doCatch(IllegalArgumentException.class) .to("file:target/messages?fileName=deadLetters.xml&fileExist=Append") .end(); from("direct:split") .transacted() .bean("accountService","credit") .bean("accountService","debit") .bean("accountService","dumpTable") .to("file:target/messages"); } } In this example, the route is split into two segments. The first segment (from the file:src/data endpoint) receives the incoming exchanges and performs the exception handling using doTry() and doCatch() . The second segment (from the direct:split endpoint) does all of the transactional work. If an exception occurs within this transactional segment, it propagates first of all to the transacted() command, causing the current transaction to be rolled back, and it is then caught by the doCatch() clause in the first route segment. The doCatch() clause does not rethrow the exception, so the file endpoint does not do any retries and infinite looping is avoided.
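As a complement to the Blueprint XML configuration shown in Section 9.2.2, the following sketch wires the same transacted jmstx JMS component in plain Java. It assumes an ActiveMQ broker at a placeholder URL and simply illustrates the same JmsConfiguration settings (connectionFactory, transactionManager, transacted=true); in an OSGi deployment the Blueprint <bean> declarations shown earlier remain the usual approach.

// Sketch of wiring the transacted "jmstx" JMS component in plain Java,
// mirroring the Blueprint XML in section 9.2.2. The broker URL is a placeholder.
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.component.jms.JmsConfiguration;
import org.apache.camel.impl.DefaultCamelContext;
import org.springframework.jms.connection.JmsTransactionManager;

public class JmsTxSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        JmsTransactionManager txManager = new JmsTransactionManager(connectionFactory);

        JmsConfiguration jmsConfig = new JmsConfiguration();
        jmsConfig.setConnectionFactory(connectionFactory);
        jmsConfig.setTransactionManager(txManager);
        jmsConfig.setTransacted(true);   // demarcate transactions for InOnly exchanges

        JmsComponent jmstx = new JmsComponent();
        jmstx.setConfiguration(jmsConfig);

        CamelContext context = new DefaultCamelContext();
        context.addComponent("jmstx", jmstx);

        // The giro/credits/debits route from section 9.2.2, started in a
        // transactional JMS consumer endpoint.
        context.addRoutes(new RouteBuilder() {
            public void configure() {
                from("jmstx:queue:giro")
                    .to("jmstx:queue:credits")
                    .to("jmstx:queue:debits");
            }
        });

        context.start();
        Thread.sleep(10000);
        context.stop();
    }
}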
[ "import org.apache.camel.builder.RouteBuilder; class MyRouteBuilder extends RouteBuilder { public void configure() { from(\"file:src/data?noop=true\") .transacted() .bean(\"accountService\",\"credit\") .bean(\"accountService\",\"debit\"); } }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" ...> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <from uri=\"file:src/data?noop=true\" /> <transacted /> <bean ref=\"accountService\" method=\"credit\" /> <bean ref=\"accountService\" method=\"debit\" /> </route> </camelContext> </blueprint>", "// Java import org.apache.camel.builder.RouteBuilder; public class MyRouteBuilder extends RouteBuilder { public void configure() { from(\"file:src/data?noop=true\") .bean(\"accountService\", \"credit\") .transacted() // <-- WARNING: Transaction started in the wrong place! .bean(\"accountService\", \"debit\"); } }", "// Java import org.apache.camel.builder.RouteBuilder; public class MyRouteBuilder extends RouteBuilder { public void configure() { from(\"file:src/data?noop=true\") .transacted() .threads(3) // WARNING: Subthreads are not in transaction scope! .bean(\"accountService\", \"credit\") .bean(\"accountService\", \"debit\"); } }", "// Java import org.apache.camel.builder.RouteBuilder; public class MyRouteBuilder extends RouteBuilder { public void configure() { from(\"file:src/data?noop=true\") .transacted() .bean(\"accountService\", \"credit\") .choice().when(xpath(\"/transaction/transfer[amount > 100]\")) .to(\"direct:txbig\") .otherwise() .to(\"direct:txsmall\"); from(\"direct:txbig\") .bean(\"accountService\", \"debit\") .bean(\"accountService\", \"dumpTable\") .to(\"file:target/messages/big\"); from(\"direct:txsmall\") .bean(\"accountService\", \"debit\") .bean(\"accountService\", \"dumpTable\") .to(\"file:target/messages/small\"); } }", "from(\"file:src/data?noop=true\") .transacted() .to(\"jmstx:queue:credits\") .to(\"jmstx:queue:debits\");", "from(\"jmstx:queue:giro\") .to(\"jmstx:queue:credits\") .to(\"jmstx:queue:debits\");", "<blueprint ...> <bean id=\"jmstx\" class=\"org.apache.camel.component.jms.JmsComponent\"> <property name=\"configuration\" ref=\"jmsConfig\" /> </bean> <bean id=\"jmsConfig\" class=\"org.apache.camel.component.jms.JmsConfiguration\"> <property name=\"connectionFactory\" ref=\"jmsConnectionFactory\" /> <property name=\"transactionManager\" ref=\"jmsTransactionManager\" /> <property name=\"transacted\" value=\"true\" /> </bean> </blueprint>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <from uri=\"jmstx:queue:giro\" /> <to uri=\"jmstx:queue:credits\" /> <to uri=\"jmstx:queue:debits\" /> </route> </camelContext> </blueprint>", "from(\"jmstx:queue:giro\") .transacted() .to(\"jmstx:queue:credits\") .to(\"jmstx:queue:debits\");", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:tx=\"http://aries.apache.org/xmlns/transactions/v1.1.0\"> <bean id=\"accountFoo\" class=\"org.jboss.fuse.example.Account\"> <tx:transaction method=\"*\" value=\"Required\" /> <property name=\"accountName\" value=\"Foo\" /> </bean> <bean id=\"accountBar\" class=\"org.jboss.fuse.example.Account\"> <tx:transaction method=\"*\" value=\"Required\" /> <property name=\"accountName\" value=\"Bar\" /> </bean> </blueprint>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" 
xmlns:tx=\"http://aries.apache.org/xmlns/transactions/v1.1.0\"> <tx:transaction bean=\"account*\" value=\"Required\" /> <bean id=\"accountFoo\" class=\"org.jboss.fuse.example.Account\"> <property name=\"accountName\" value=\"Foo\" /> </bean> <bean id=\"accountBar\" class=\"org.jboss.fuse.example.Account\"> <property name=\"accountName\" value=\"Bar\" /> </bean> </blueprint>", "<blueprint ...> <tx:transaction bean=\"accountFoo,accountBar\" value=\"...\" /> </blueprint>", "<blueprint ...> <tx:transaction bean=\"account*,jms*\" value=\"...\" /> </blueprint>", "<bean id=\"accountFoo\" class=\"org.jboss.fuse.example.Account\"> <tx:transaction method=\"debit,credit,transfer\" value=\"Required\" /> <property name=\"accountName\" value=\"Foo\" /> </bean>", "from(\"file:src/data?noop=true\") .transacted(\"PROPAGATION_REQUIRES_NEW\") .bean(\"accountService\",\"credit\") .bean(\"accountService\",\"debit\") .to(\"file:target/messages\");", "<blueprint ...> <bean id=\"PROPAGATION_MANDATORY \"class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\" /> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_MANDATORY\" /> </bean> </blueprint>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <bean id=\"PROPAGATION_MANDATORY \" class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\" /> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_MANDATORY\" /> </bean> <bean id=\"PROPAGATION_NESTED\" class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\" /> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_NESTED\" /> </bean> <bean id=\"PROPAGATION_NEVER\" class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\" /> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_NEVER\" /> </bean> <bean id=\"PROPAGATION_NOT_SUPPORTED\" class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\" /> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_NOT_SUPPORTED\" /> </bean> <!-- This is the default behavior. 
--> <bean id=\"PROPAGATION_REQUIRED\" class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\" /> </bean> <bean id=\"PROPAGATION_REQUIRES_NEW\" class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\" /> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_REQUIRES_NEW\" /> </bean> <bean id=\"PROPAGATION_SUPPORTS\" class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\" /> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_SUPPORTS\" /> </bean> </blueprint>", "from(\"file:src/data?noop=true\") .transacted() .bean(\"accountService\",\"credit\") .transacted(\"PROPAGATION_NEVER\") .bean(\"accountService\",\"debit\");", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <from uri=\"file:src/data?noop=true\" /> <transacted /> <bean ref=\"accountService\" method=\"credit\" /> <transacted ref=\"PROPAGATION_NEVER\" /> <bean ref=\"accountService\" method=\"debit\" /> </route> </camelContext> </blueprint>", "from(\"file:src/data?noop=true\") .transacted() .bean(\"accountService\",\"credit\") .choice().when(xpath(\"/transaction/transfer[amount > 100]\")) .rollback() .otherwise() .to(\"direct:txsmall\"); from(\"direct:txsmall\") .bean(\"accountService\",\"debit\") .bean(\"accountService\",\"dumpTable\") .to(\"file:target/messages/small\");", "from(\"file:src/data?noop=true\") .transacted() .bean(\"accountService\",\"credit\") .choice().when(xpath(\"/transaction/transfer[amount > 100]\")) .markRollbackOnly() .otherwise() .to(\"direct:txsmall\");", "// Java import org.apache.camel.builder.RouteBuilder; public class MyRouteBuilder extends RouteBuilder { public void configure() { onException(IllegalArgumentException.class) .maximumRedeliveries(1) .handled(true) .to(\"file:target/messages?fileName=deadLetters.xml&fileExist=Append\") .markRollbackOnly(); // NB: Must come *after* the dead letter endpoint. from(\"file:src/data?noop=true\") .transacted() .bean(\"accountService\",\"credit\") .bean(\"accountService\",\"debit\") .bean(\"accountService\",\"dumpTable\") .to(\"file:target/messages\"); } }", "// Java import org.apache.camel.builder.RouteBuilder; public class MyRouteBuilder extends RouteBuilder { public void configure() { from(\"file:src/data?noop=true\") .doTry() .to(\"direct:split\") .doCatch(IllegalArgumentException.class) .to(\"file:target/messages?fileName=deadLetters.xml&fileExist=Append\") .end(); from(\"direct:split\") .transacted() .bean(\"accountService\",\"credit\") .bean(\"accountService\",\"debit\") .bean(\"accountService\",\"dumpTable\") .to(\"file:target/messages\"); } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_transaction_guide/camel-application
Appendix C. Revision History
Appendix C. Revision History Revision 6.6.0-1 Wed 7 Sep 2016 Christian Huffman Updating for 6.6.1. Revision 6.6.0-0 Thu 7 Jan 2016 Christian Huffman Initial draft for 6.6.0. Updated versions. BZ-1300709: Included note on Camel Quickstart for Fuse 6.2.1. BZ-1231129: Corrected the infinispan schema reference.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/appe-revision_history
Chapter 5. Admin Client configuration properties
Chapter 5. Admin Client configuration properties bootstrap.controllers Type: list Default: "" Importance: high A list of host/port pairs to use for establishing the initial connection to the KRaft controller quorum. This list should be in the form host1:port1,host2:port2,... . bootstrap.servers Type: list Default: "" Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . client.id Type: string Default: "" Importance: medium An id string to pass to the server when making requests. 
The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. connections.max.idle.ms Type: long Default: 300000 (5 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. default.api.timeout.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: medium Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter. receive.buffer.bytes Type: int Default: 65536 (64 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. 
If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. security.protocol Type: string Default: PLAINTEXT Valid Values: (case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT] Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Importance: medium The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 
'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. auto.include.jmx.reporter Type: boolean Default: true Importance: low Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters . This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter. enable.metrics.push Type: boolean Default: true Importance: low Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metric.reporters Type: list Default: "" Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value. retries Type: int Default: 2147483647 Valid Values: [0,... ,2147483647] Importance: low Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. 
It is recommended to set the value to either zero or MAX_VALUE and use corresponding timeout parameters to control how long a client should retry a request. retry.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms , then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.connect.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. sasl.login.read.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. 
sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. sasl.oauthbearer.expected.audience Type: list Default: null Importance: low The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. 
JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. security.providers Type: string Default: null Importance: low A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. 
ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
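To illustrate how a few of these properties are typically supplied in client code, here is a minimal Java sketch; it is not taken from the product documentation, and the broker addresses, truststore path, and password are placeholder values.
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class AdminClientExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Initial connection to the cluster (bootstrap.servers).
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka0.my-domain.com:9092,kafka1.my-domain.com:9092");
        // Request timeout and retry behavior (request.timeout.ms, retries).
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000");
        props.put(AdminClientConfig.RETRIES_CONFIG, "5");
        // TLS encryption (security.protocol and the ssl.truststore.* properties).
        props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put("ssl.truststore.location", "/opt/kafka/certs/truststore.p12");
        props.put("ssl.truststore.password", "changeit");
        props.put("ssl.truststore.type", "PKCS12");

        // Create the Admin client and list the topics in the cluster.
        try (Admin admin = Admin.create(props)) {
            admin.listTopics().names().get().forEach(System.out::println);
        }
    }
}
Any of the other properties listed in this chapter can be added to the Properties object in the same way, using the property name as the key.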
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/kafka_configuration_properties/admin-client-configuration-properties-str
6.15. Improving Uptime with Virtual Machine High Availability
6.15. Improving Uptime with Virtual Machine High Availability 6.15.1. What is High Availability? High availability is recommended for virtual machines running critical workloads. A highly available virtual machine is automatically restarted, either on its original host or another host in the cluster, if its process is interrupted, such as in the following scenarios: A host becomes non-operational due to hardware failure. A host is put into maintenance mode for scheduled downtime. A host becomes unavailable because it has lost communication with an external storage resource. A highly available virtual machine is not restarted if it is shut down cleanly, such as in the following scenarios: The virtual machine is shut down from within the guest. The virtual machine is shut down from the Manager. The host is shut down by an administrator without being put in maintenance mode first. With storage domains V4 or later, virtual machines have the additional capability to acquire a lease on a special volume on the storage, enabling a virtual machine to start on another host even if the original host loses power. The functionality also prevents the virtual machine from being started on two different hosts, which may lead to corruption of the virtual machine disks. With high availability, interruption to service is minimal because virtual machines are restarted within seconds with no user intervention required. High availability keeps your resources balanced by restarting guests on a host with low current resource utilization, or based on any workload balancing or power saving policies that you configure. This ensures that there is sufficient capacity to restart virtual machines at all times. High Availability and Storage I/O Errors If a storage I/O error occurs, the virtual machine is paused. You can define how the host handles highly available virtual machines after the connection with the storage domain is reestablished; they can either be resumed, ungracefully shut down, or remain paused. For more information about these options, see Virtual Machine High Availability settings explained . 6.15.2. High Availability Considerations A highly available host requires a power management device and fencing parameters. In addition, for a virtual machine to be highly available when its host becomes non-operational, it needs to be started on another available host in the cluster. To enable the migration of highly available virtual machines: Power management must be configured for the hosts running the highly available virtual machines. The host running the highly available virtual machine must be part of a cluster which has other available hosts. The destination host must be running. The source and destination host must have access to the data domain on which the virtual machine resides. The source and destination host must have access to the same virtual networks and VLANs. There must be enough CPUs on the destination host that are not in use to support the virtual machine's requirements. There must be enough RAM on the destination host that is not in use to support the virtual machine's requirements. 6.15.3. Configuring a Highly Available Virtual Machine High availability must be configured individually for each virtual machine. Procedure Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the High Availability tab. Select the Highly Available check box to enable high availability for the virtual machine. 
Select the storage domain to hold the virtual machine lease, or select No VM Lease to disable the functionality, from the Target Storage Domain for VM Lease drop-down list. See What is high availability for more information about virtual machine leases. Important This functionality is only available on storage domains that are V4 or later. Select AUTO_RESUME , LEAVE_PAUSED , or KILL from the Resume Behavior drop-down list. If you defined a virtual machine lease, KILL is the only option available. For more information see Virtual Machine High Availability settings explained . Select Low , Medium , or High from the Priority drop-down list. When migration is triggered, a queue is created in which the high priority virtual machines are migrated first. If a cluster is running low on resources, only the high priority virtual machines are migrated. Click OK .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-improving_uptime_with_virtual_machine_high_availability
Authentication and authorization
Authentication and authorization OpenShift Container Platform 4.10 Configuring user authentication and access controls for users and services Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/authentication_and_authorization/index
Chapter 113. Spring Batch
Chapter 113. Spring Batch Since Camel 2.10 Only producer is supported The Spring Batch component and support classes provide an integration bridge between Camel and the Spring Batch infrastructure. 113.1. Dependencies When using spring-batch with Red Hat build of Camel Spring Boot, use the following Maven dependency to enable support for auto-configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-batch-starter</artifactId> </dependency> 113.2. URI format Where jobName represents the name of the Spring Batch job located in the Camel registry. If a JobRegistry is provided, it is used to locate the job. This component can only be used to define producer endpoints, which means that you cannot use the Spring Batch component in a from() statement. 113.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 113.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, URLs for network connections, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 113.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for URLs, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 113.4. Component Options The Spring Batch component supports 4 options, which are listed below. Name Description Default Type jobLauncher (producer) Explicitly specifies a JobLauncher to be used. JobLauncher jobRegistry (producer) Explicitly specifies a JobRegistry to be used. JobRegistry lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 113.5. 
Endpoint Options The Spring Batch endpoint is configured using URI syntax: Following are the path and query parameters: 113.5.1. Path Parameters (1 parameter) Name Description Default Type jobName (producer) Required The name of the Spring Batch job located in the registry. String 113.5.2. Query Parameters (4 parameters) Name Description Default Type jobFromHeader (producer) Explicitly defines if the jobName should be taken from the headers instead of the URI. false boolean jobLauncher (producer) Explicitly specifies a JobLauncher to be used. JobLauncher jobRegistry (producer) Explicitly specifies a JobRegistry to be used. JobRegistry lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 113.6. Usage When the Spring Batch component receives a message, it triggers the job execution. The job is executed using the org.springframework.batch.core.launch.JobLauncher instance resolved according to the following algorithm: If a JobLauncher is manually set on the component, then use it. If the jobLauncherRef option is set on the component, then search the Camel Registry for the JobLauncher with the given name. If there is a JobLauncher registered in the Camel Registry under the name jobLauncher, then use it. If none of the steps above resolves the JobLauncher and there is exactly one JobLauncher instance in the Camel Registry, then use it. All headers found in the message are passed to the JobLauncher as job parameters. String , Long , Double and java.util.Date values are copied to the org.springframework.batch.core.JobParametersBuilder and other data types are converted to Strings. 113.7. Examples Triggering the Spring Batch job execution: from("direct:startBatch").to("spring-batch:myJob"); Triggering the Spring Batch job execution with the JobLauncher set explicitly: from("direct:startBatch").to("spring-batch:myJob?jobLauncherRef=myJobLauncher"); A JobExecution instance returned by the JobLauncher is forwarded by the SpringBatchProducer as the output message. You can use the JobExecution instance to perform some operations using the Spring Batch API directly. from("direct:startBatch").to("spring-batch:myJob").to("mock:JobExecutions"); ... MockEndpoint mockEndpoint = ...; JobExecution jobExecution = mockEndpoint.getExchanges().get(0).getIn().getBody(JobExecution.class); BatchStatus currentJobStatus = jobExecution.getStatus(); 113.8. Support classes Apart from the component, Camel Spring Batch also provides support classes that you can use to hook into the Spring Batch infrastructure. 113.8.1. CamelItemReader CamelItemReader can be used to read batch data directly from the Camel infrastructure. 
For example, the snippet below configures Spring Batch to read data from JMS queue: <bean id="camelReader" class="org.apache.camel.component.spring.batch.support.CamelItemReader"> <constructor-arg ref="consumerTemplate"/> <constructor-arg value="jms:dataQueue"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="camelReader" writer="someWriter" commit-interval="100"/> </batch:tasklet> </batch:step> </batch:job> 113.8.2. CamelItemWriter CamelItemWriter has similar purpose as CamelItemReader , but it is dedicated to write chunk of the processed data. For example the snippet below configures Spring Batch to read data from JMS queue. <bean id="camelwriter" class="org.apache.camel.component.spring.batch.support.CamelItemWriter"> <constructor-arg ref="producerTemplate"/> <constructor-arg value="jms:dataQueue"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="someReader" writer="camelwriter" commit-interval="100"/> </batch:tasklet> </batch:step> </batch:job> 113.8.3. CamelItemProcessor CamelItemProcessor is the implementation of Spring Batch org.springframework.batch.item.ItemProcessor interface. The latter implementation relays on Request Reply pattern to delegate the processing of the batch item to the Camel infrastructure. The item to process is sent to the Camel endpoint as the body of the message. For example the snippet below performs simple processing of the batch item using the Direct endpoint and the Simple expression language . <camel:camelContext> <camel:route> <camel:from uri="direct:processor"/> <camel:setExchangePattern pattern="InOut"/> <camel:setBody> <camel:simple>Processed USD{body}</camel:simple> </camel:setBody> </camel:route> </camel:camelContext> <bean id="camelProcessor" class="org.apache.camel.component.spring.batch.support.CamelItemProcessor"> <constructor-arg ref="producerTemplate"/> <constructor-arg value="direct:processor"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="someReader" writer="someWriter" processor="camelProcessor" commit-interval="100"/> </batch:tasklet> </batch:step> </batch:job> 113.8.4. CamelJobExecutionListener CamelJobExecutionListener is the implementation of the org.springframework.batch.core.JobExecutionListener interface sending job execution events to the Camel endpoint. The org.springframework.batch.core.JobExecution instance produced by the Spring Batch is sent as a body of the message. To distinguish between before- and after-callbacks SPRING_BATCH_JOB_EVENT_TYPE header is set to the BEFORE or AFTER value. The example snippet below sends Spring Batch job execution events to the JMS queue. <bean id="camelJobExecutionListener" class="org.apache.camel.component.spring.batch.support.CamelJobExecutionListener"> <constructor-arg ref="producerTemplate"/> <constructor-arg value="jms:batchEventsBus"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="someReader" writer="someWriter" commit-interval="100"/> </batch:tasklet> </batch:step> <batch:listeners> <batch:listener ref="camelJobExecutionListener"/> </batch:listeners> </batch:job> 113.9. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.component.spring-batch.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.spring-batch.enabled Whether to enable auto-configuration of the spring-batch component. This is enabled by default. Boolean camel.component.spring-batch.job-launcher Explicitly specifies a JobLauncher to be used. The option is a org.springframework.batch.core.launch.JobLauncher type. JobLauncher camel.component.spring-batch.job-registry Explicitly specifies a JobRegistry to be used. The option is a org.springframework.batch.core.configuration.JobRegistry type. JobRegistry camel.component.spring-batch.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
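As a complement to the Usage section above, which notes that message headers are passed to the JobLauncher as job parameters, the following is a minimal sketch of a route that supplies a job parameter through a header; the job name myJob and the inputFile parameter are assumed example values, not part of the product documentation.
import org.apache.camel.builder.RouteBuilder;

public class StartBatchRoute extends RouteBuilder {
    public void configure() {
        from("direct:startBatch")
            // Message headers are passed to the JobLauncher as job parameters,
            // so the job can read this value as the "inputFile" parameter.
            .setHeader("inputFile", constant("/tmp/input.csv"))
            .to("spring-batch:myJob")
            // The JobExecution returned by the JobLauncher becomes the message body.
            .log("Job finished with status ${body.status}");
    }
}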
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-batch-starter</artifactId> </dependency>", "spring-batch:jobName[?options]", "spring-batch:jobName", "from(\"direct:startBatch\").to(\"spring-batch:myJob\");", "from(\"direct:startBatch\").to(\"spring-batch:myJob?jobLauncherRef=myJobLauncher\");", "from(\"direct:startBatch\").to(\"spring-batch:myJob\").to(\"mock:JobExecutions\"); MockEndpoint mockEndpoint = ...; JobExecution jobExecution = mockEndpoint.getExchanges().get(0).getIn().getBody(JobExecution.class); BatchStatus currentJobStatus = jobExecution.getStatus();", "<bean id=\"camelReader\" class=\"org.apache.camel.component.spring.batch.support.CamelItemReader\"> <constructor-arg ref=\"consumerTemplate\"/> <constructor-arg value=\"jms:dataQueue\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"camelReader\" writer=\"someWriter\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> </batch:job>", "<bean id=\"camelwriter\" class=\"org.apache.camel.component.spring.batch.support.CamelItemWriter\"> <constructor-arg ref=\"producerTemplate\"/> <constructor-arg value=\"jms:dataQueue\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"someReader\" writer=\"camelwriter\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> </batch:job>", "<camel:camelContext> <camel:route> <camel:from uri=\"direct:processor\"/> <camel:setExchangePattern pattern=\"InOut\"/> <camel:setBody> <camel:simple>Processed USD{body}</camel:simple> </camel:setBody> </camel:route> </camel:camelContext> <bean id=\"camelProcessor\" class=\"org.apache.camel.component.spring.batch.support.CamelItemProcessor\"> <constructor-arg ref=\"producerTemplate\"/> <constructor-arg value=\"direct:processor\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"someReader\" writer=\"someWriter\" processor=\"camelProcessor\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> </batch:job>", "<bean id=\"camelJobExecutionListener\" class=\"org.apache.camel.component.spring.batch.support.CamelJobExecutionListener\"> <constructor-arg ref=\"producerTemplate\"/> <constructor-arg value=\"jms:batchEventsBus\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"someReader\" writer=\"someWriter\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> <batch:listeners> <batch:listener ref=\"camelJobExecutionListener\"/> </batch:listeners> </batch:job>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-spring-batch-component-starter
Appendix D. Revision History
Appendix D. Revision History Revision 6.4.0-59 Fri 25 May 2018 David Le Sage Updates for 6.4
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/appe-revision_history
18.12.11.5. Writing your own filters
18.12.11.5. Writing your own filters Since libvirt only provides a couple of example networking filters, you may consider writing your own. When planning to do so, there are a couple of things you need to know about the network filtering subsystem and how it works internally. You must also know and understand the protocols that you want to filter well enough to ensure that only the traffic you want to allow can pass, and that no other traffic gets through. The network filtering subsystem is currently only available on Linux host physical machines and only works for QEMU and KVM virtual machines. On Linux, it builds upon the support for ebtables, iptables and ip6tables and makes use of their features. Considering the list found in Section 18.12.10, "Supported Protocols", the following protocols can be implemented using ebtables: mac stp (spanning tree protocol) vlan (802.1Q) arp, rarp ipv4 ipv6 Any protocol that runs over IPv4 is supported using iptables; those over IPv6 are implemented using ip6tables. On a Linux host physical machine, all traffic filtering rules created by libvirt's network filtering subsystem first pass through the filtering support implemented by ebtables and only afterwards through the iptables or ip6tables filters. If a filter tree has rules for any of the protocols mac, stp, vlan, arp, rarp, ipv4, or ipv6, the ebtables rules and values listed will automatically be used first. Multiple chains for the same protocol can be created. The name of the chain must have a prefix of one of the previously enumerated protocols. For example, to create an additional chain for handling ARP traffic, you can specify a chain named arp-test. As an example, it is possible to filter UDP traffic by source and destination ports using the IP protocol filter and specifying attributes for the protocol, source and destination IP addresses, and ports of the UDP packets that are to be accepted. This allows early filtering of UDP traffic with ebtables. However, once an IP or IPv6 packet, such as a UDP packet, has passed the ebtables layer and there is at least one rule in the filter tree that instantiates iptables or ip6tables rules, a rule that lets the UDP packet pass must also be provided for those filtering layers. This can be achieved with a rule containing an appropriate udp or udp-ipv6 traffic filtering node. Example 18.11. Creating a custom filter Suppose a filter is needed to fulfill the following list of requirements: prevents a VM's interface from MAC, IP and ARP spoofing opens only TCP ports 22 and 80 of a VM's interface allows the VM to send ping traffic from an interface but not let the VM be pinged on the interface allows the VM to do DNS lookups (UDP towards port 53) The requirement to prevent spoofing is fulfilled by the existing clean-traffic network filter, so it can be referenced from the custom filter. To enable traffic for TCP ports 22 and 80, two rules are added to allow this type of traffic. To allow the guest virtual machine to send ping traffic, a rule is added for ICMP traffic. For simplicity, general ICMP traffic is allowed to be initiated from the guest virtual machine, rather than restricting the rule to ICMP echo request and response messages. All other traffic is prevented from reaching or being initiated by the guest virtual machine; to do this, a rule is added that drops all other traffic. 
Assuming the guest virtual machine is called test and the interface to associate our filter with is called eth0 , a filter is created named test-eth0 . The result of these considerations is the following network filter XML:
[ "<filter name='test-eth0'> <!- - This rule references the clean traffic filter to prevent MAC, IP and ARP spoofing. By not providing an IP address parameter, libvirt will detect the IP address the guest virtual machine is using. - -> <filterref filter='clean-traffic'/> <!- - This rule enables TCP ports 22 (ssh) and 80 (http) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='22'/> </rule> <rule action='accept' direction='in'> <tcp dstportstart='80'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine including ping traffic - -> <rule action='accept' direction='out'> <icmp/> </rule>> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-write-own-filters
Chapter 10. Using Streams for Apache Kafka with Kafka Connect
Chapter 10. Using Streams for Apache Kafka with Kafka Connect Use Kafka Connect to stream data between Kafka and external systems. Kafka Connect provides a framework for moving large amounts of data while maintaining scalability and reliability. Kafka Connect is typically used to integrate Kafka with database, storage, and messaging systems that are external to your Kafka cluster. Kafka Connect runs in standalone or distributed modes. Standalone mode In standalone mode, Kafka Connect runs on a single node. Standalone mode is intended for development and testing. Distributed mode In distributed mode, Kafka Connect runs across one or more worker nodes and the workloads are distributed among them. Distributed mode is intended for production. Kafka Connect uses connector plugins that implement connectivity for different types of external systems. There are two types of connector plugins: sink and source. Sink connectors stream data from Kafka to external systems. Source connectors stream data from external systems into Kafka. You can also use the Kafka Connect REST API to create, manage, and monitor connector instances. Connector configuration specifies details such as the source or sink connectors and the Kafka topics to read from or write to. How you manage the configuration depends on whether you are running Kafka Connect in standalone or distributed mode. In standalone mode, you can provide the connector configuration as JSON through the Kafka Connect REST API or you can use properties files to define the configuration. In distributed mode, you can only provide the connector configuration as JSON through the Kafka Connect REST API. Handling high volumes of messages You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages . 10.1. Using Kafka Connect in standalone mode In Kafka Connect standalone mode, connectors run on the same node as the Kafka Connect worker process, which runs as a single process in a single JVM. This means that the worker process and connectors share the same resources, such as CPU, memory, and disk. 10.1.1. Configuring Kafka Connect in standalone mode To configure Kafka Connect in standalone mode, edit the config/connect-standalone.properties configuration file. The following options are the most important. bootstrap.servers A list of Kafka broker addresses used as bootstrap connections to Kafka. For example, kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 . key.converter The class used to convert message keys to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . value.converter The class used to convert message payloads to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . offset.storage.file.filename Specifies the file in which the offset data is stored. Connector plugins open client connections to the Kafka brokers using the bootstrap address. To configure these connections, use the standard Kafka producer and consumer configuration options prefixed by producer. or consumer. . 10.1.2. Running Kafka Connect in standalone mode Configure and run Kafka Connect in standalone mode. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. You have specified connector configuration in properties files. You can also use the Kafka Connect REST API to manage connectors . 
Procedure Edit the ./config/connect-standalone.properties Kafka Connect configuration file and set bootstrap.server to point to your Kafka brokers. For example: bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 Start Kafka Connect with the configuration file and specify one or more connector configurations. ./bin/connect-standalone.sh ./config/connect-standalone.properties connector1.properties [connector2.properties ...] Verify that Kafka Connect is running. jcmd | grep ConnectStandalone 10.2. Using Kafka Connect in distributed mode In distributed mode, Kafka Connect runs as a cluster of worker processes, with each worker running on a separate node. Connectors can run on any worker in the cluster, allowing for greater scalability and fault tolerance. The connectors are managed by the workers, which coordinate with each other to distribute the work and ensure that each connector is running on a single node at any given time. 10.2.1. Configuring Kafka Connect in distributed mode To configure Kafka Connect in distributed mode, edit the config/connect-distributed.properties configuration file. The following options are the most important. bootstrap.servers A list of Kafka broker addresses used as bootstrap connections to Kafka. For example, kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 . key.converter The class used to convert message keys to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . value.converter The class used to convert message payloads to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . group.id The name of the distributed Kafka Connect cluster. This must be unique and must not conflict with another consumer group ID. The default value is connect-cluster . config.storage.topic The Kafka topic used to store connector configurations. The default value is connect-configs . offset.storage.topic The Kafka topic used to store offsets. The default value is connect-offset . status.storage.topic The Kafka topic used for worker node statuses. The default value is connect-status . Streams for Apache Kafka includes an example configuration file for Kafka Connect in distributed mode - see config/connect-distributed.properties in the Streams for Apache Kafka installation directory. Connector plugins open client connections to the Kafka brokers using the bootstrap address. To configure these connections, use the standard Kafka producer and consumer configuration options prefixed by producer. or consumer. . 10.2.2. Running Kafka Connect in distributed mode Configure and run Kafka Connect in distributed mode. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Running the cluster Edit the ./config/connect-distributed.properties Kafka Connect configuration file on all Kafka Connect worker nodes. Set the bootstrap.server option to point to your Kafka brokers. Set the group.id option. Set the config.storage.topic option. Set the offset.storage.topic option. Set the status.storage.topic option. For example: bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 group.id=my-group-id config.storage.topic=my-group-id-configs offset.storage.topic=my-group-id-offsets status.storage.topic=my-group-id-status Start the Kafka Connect workers with the ./config/connect-distributed.properties configuration file on all Kafka Connect nodes. 
./bin/connect-distributed.sh ./config/connect-distributed.properties Verify that Kafka Connect is running. jcmd | grep ConnectDistributed Use the Kafka Connect REST API to manage connectors . 10.3. Managing connectors The Kafka Connect REST API provides endpoints for creating, updating, and deleting connectors directly. You can also use the API to check the status of connectors or change logging levels. When you create a connector through the API, you provide the configuration details for the connector as part of the API call. You can also add and manage connectors as plugins. Plugins are packaged as JAR files that contain the classes to implement the connectors through the Kafka Connect API. You just need to specify the plugin in the classpath or add it to a plugin path for Kafka Connect to run the connector plugin on startup. In addition to using the Kafka Connect REST API or plugins to manage connectors, you can also add connector configuration using properties files when running Kafka Connect in standalone mode. To do this, you simply specify the location of the properties file when starting the Kafka Connect worker process. The properties file should contain the configuration details for the connector, including the connector class, source and destination topics, and any required authentication or serialization settings. 10.3.1. Limiting access to the Kafka Connect API The Kafka Connect REST API can be accessed by anyone who has authenticated access and knows the endpoint URL, which includes the hostname/IP address and port number. It is crucial to restrict access to the Kafka Connect API only to trusted users to prevent unauthorized actions and potential security issues. For improved security, we recommend configuring the following properties for the Kafka Connect API: (Kafka 3.4 or later) org.apache.kafka.disallowed.login.modules to specifically exclude insecure login modules connector.client.config.override.policy set to NONE to prevent connector configurations from overriding the Kafka Connect configuration and the consumers and producers it uses 10.3.2. Configuring connectors Use the Kafka Connect REST API or properties files to create, manage, and monitor connector instances. You can use the REST API when using Kafka Connect in standalone or distributed mode. You can use properties files when using Kafka Connect in standalone mode. 10.3.2.1. Using the Kafka Connect REST API to manage connectors When using the Kafka Connect REST API, you can create connectors dynamically by sending PUT or POST HTTP requests to the Kafka Connect REST API, specifying the connector configuration details in the request body. Tip When you use the PUT command, it's the same command for starting and updating connectors. The REST interface listens on port 8083 by default and supports the following endpoints: GET /connectors Return a list of existing connectors. POST /connectors Create a connector. The request body has to be a JSON object with the connector configuration. GET /connectors/<connector_name> Get information about a specific connector. GET /connectors/<connector_name>/config Get configuration of a specific connector. PUT /connectors/<connector_name>/config Update the configuration of a specific connector. GET /connectors/<connector_name>/status Get the status of a specific connector. 
GET /connectors/<connector_name>/tasks Get a list of tasks for a specific connector GET /connectors/<connector_name>/tasks/ <task_id> /status Get the status of a task for a specific connector PUT /connectors/<connector_name>/pause Pause the connector and all its tasks. The connector will stop processing any messages. PUT /connectors/<connector_name>/stop Stop the connector and all its tasks. The connector will stop processing any messages. Stopping a connector from running may be more suitable for longer durations than just pausing. PUT /connectors/<connector_name>/resume Resume a paused connector. POST /connectors/<connector_name>/restart Restart a connector in case it has failed. POST /connectors/<connector_name>/tasks/ <task_id> /restart Restart a specific task. DELETE /connectors/<connector_name> Delete a connector. GET /connectors/<connector_name>/topics Get the topics for a specific connector. PUT /connectors/<connector_name>/topics/reset Empty the set of active topics for a specific connector. GET /connectors/<connector_name>/offsets Get the current offsets for a connector. DELETE /connectors/<connector_name>/offsets Reset the offsets for a connector, which must be in a stopped state. PATCH /connectors/<connector_name>/offsets Adjust the offsets (using an offset property in the request) for a connector, which must be in a stopped state. GET /connector-plugins Get a list of all supported connector plugins. GET /connector-plugins/<connector_plugin_type>/config Get the configuration for a connector plugin. PUT /connector-plugins/<connector_type>/config/validate Validate connector configuration. 10.3.2.2. Specifying connector configuration properties To configure a Kafka Connect connector, you need to specify the configuration details for source or sink connectors. There are two ways to do this: through the Kafka Connect REST API, using JSON to provide the configuration, or by using properties files to define the configuration properties. The specific configuration options available for each type of connector may differ, but both methods provide a flexible way to specify the necessary settings. The following options apply to all connectors: name The name of the connector, which must be unique within the current Kafka Connect instance. connector.class The class of the connector plug-in. For example, org.apache.kafka.connect.file.FileStreamSinkConnector . tasks.max The maximum number of tasks that the specified connector can use. Tasks enable the connector to perform work in parallel. The connector might create fewer tasks than specified. key.converter The class used to convert message keys to and from Kafka format. This overrides the default value set by the Kafka Connect configuration. For example, org.apache.kafka.connect.json.JsonConverter . value.converter The class used to convert message payloads to and from Kafka format. This overrides the default value set by the Kafka Connect configuration. For example, org.apache.kafka.connect.json.JsonConverter . You must set at least one of the following options for sink connectors: topics A comma-separated list of topics used as input. topics.regex A Java regular expression of topics used as input. For all other options, see the connector properties in the Apache Kafka documentation . Note Streams for Apache Kafka includes the example connector configuration files config/connect-file-sink.properties and config/connect-file-source.properties in the Streams for Apache Kafka installation directory. 
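As a point of reference, a sink connector properties file that uses the options above might look like the following. This is a minimal sketch in the spirit of the shipped config/connect-file-sink.properties example; the connector name, topics, output file path, and converter override are assumptions chosen for illustration. name=my-file-sink-connector connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector tasks.max=1 topics=my-topic-1,my-topic-2 file=/tmp/output-file.txt # Optional per-connector converter override value.converter=org.apache.kafka.connect.json.JsonConverter In standalone mode you can pass a file like this on the command line when starting the worker; in distributed mode you submit the same settings as JSON through the REST API instead.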
Additional resources Kafka Connect REST API OpenAPI documentation 10.3.3. Creating connectors using the Kafka Connect API Use the Kafka Connect REST API to create a connector to use with Kafka Connect. Prerequisites A Kafka Connect installation. Procedure Prepare a JSON payload with the connector configuration. For example: { "name": "my-connector", "config": { "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector", "tasks.max": "1", "topics": "my-topic-1,my-topic-2", "file": "/tmp/output-file.txt" } } Send a POST request to <KafkaConnectAddress> :8083/connectors to create the connector. The following example uses curl : curl -X POST -H "Content-Type: application/json" --data @sink-connector.json http://connect0.my-domain.com:8083/connectors Verify that the connector was deployed by sending a GET request to <KafkaConnectAddress> :8083/connectors . The following example uses curl : curl http://connect0.my-domain.com:8083/connectors 10.3.4. Deleting connectors using the Kafka Connect API Use the Kafka Connect REST API to delete a connector from Kafka Connect. Prerequisites A Kafka Connect installation. Procedure Verify that the connector exists by sending a GET request to <KafkaConnectAddress> :8083/connectors and checking that the connector name appears in the list. The following example uses curl : curl http://connect0.my-domain.com:8083/connectors To delete the connector, send a DELETE request to <KafkaConnectAddress> :8083/connectors/ <ConnectorName> . The following example uses curl : curl -X DELETE http://connect0.my-domain.com:8083/connectors/my-connector Verify that the connector was deleted by sending a GET request to <KafkaConnectAddress> :8083/connectors . The following example uses curl : curl http://connect0.my-domain.com:8083/connectors 10.3.5. Adding connector plugins Kafka provides example connectors to use as a starting point for developing connectors. The following example connectors are included with Streams for Apache Kafka: FileStreamSink Reads data from Kafka topics and writes the data to a file. FileStreamSource Reads data from a file and sends the data to Kafka topics. Both connectors are contained in the libs/connect-file-<kafka_version>.redhat-<build>.jar plugin. To use the connector plugins in Kafka Connect, you can add them to the classpath or specify a plugin path in the Kafka Connect properties file and copy the plugins to that location. Specifying the example connectors in the classpath CLASSPATH=/opt/kafka/libs/connect-file-<kafka_version>.redhat-<build>.jar /opt/kafka/bin/connect-distributed.sh Setting a plugin path plugin.path=/opt/kafka/connector-plugins,/opt/connectors The plugin.path configuration option can contain a comma-separated list of paths. You can add more connector plugins if needed. Kafka Connect searches for and runs connector plugins at startup. Note When running Kafka Connect in distributed mode, plugins must be made available on all worker nodes.
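After adding plugins and restarting the workers, you can confirm that Kafka Connect loaded them by calling the GET /connector-plugins endpoint listed earlier. The hostname and port below reuse the values from the previous curl examples and are assumptions for your environment. curl http://connect0.my-domain.com:8083/connector-plugins The response lists the connector classes that the worker you query found on its classpath or plugin path; in distributed mode, repeat the check against each worker to confirm that the plugin is available everywhere before creating a connector that uses it.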
[ "bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092", "./bin/connect-standalone.sh ./config/connect-standalone.properties connector1.properties [connector2.properties ...]", "jcmd | grep ConnectStandalone", "bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 group.id=my-group-id config.storage.topic=my-group-id-configs offset.storage.topic=my-group-id-offsets status.storage.topic=my-group-id-status", "./bin/connect-distributed.sh ./config/connect-distributed.properties", "jcmd | grep ConnectDistributed", "{ \"name\": \"my-connector\", \"config\": { \"connector.class\": \"org.apache.kafka.connect.file.FileStreamSinkConnector\", \"tasks.max\": \"1\", \"topics\": \"my-topic-1,my-topic-2\", \"file\": \"/tmp/output-file.txt\" } }", "curl -X POST -H \"Content-Type: application/json\" --data @sink-connector.json http://connect0.my-domain.com:8083/connectors", "curl http://connect0.my-domain.com:8083/connectors", "curl http://connect0.my-domain.com:8083/connectors", "curl -X DELETE http://connect0.my-domain.com:8083/connectors/my-connector", "curl http://connect0.my-domain.com:8083/connectors", "CLASSPATH=/opt/kafka/libs/connect-file-<kafka_version>.redhat-<build>.jar opt/kafka/bin/connect-distributed.sh", "plugin.path=/opt/kafka/connector-plugins,/opt/connectors" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-kafka-connect-str
Chapter 14. Understanding low latency tuning for cluster nodes
Chapter 14. Understanding low latency tuning for cluster nodes Edge computing has a key role in reducing latency and congestion problems and improving application performance for telco and 5G network applications. Maintaining a network architecture with the lowest possible latency is key for meeting the network performance requirements of 5G. Compared to 4G technology, with an average latency of 50 ms, 5G is targeted to reach a latency of 1 ms or less. This reduction in latency boosts wireless throughput by a factor of 10. 14.1. About low latency Many of the applications deployed in the Telco space require low latency and can tolerate only zero packet loss. Tuning for zero packet loss helps mitigate the inherent issues that degrade network performance. For more information, see Tuning for Zero Packet Loss in Red Hat OpenStack Platform (RHOSP) . The Edge computing initiative also comes into play for reducing latency rates. Think of it as being on the edge of the cloud and closer to the user. This greatly reduces the distance between the user and distant data centers, resulting in reduced application response times and latency. Administrators must be able to manage their many Edge sites and local services in a centralized way so that all of the deployments can run at the lowest possible management cost. They also need an easy way to deploy and configure certain nodes of their cluster for real-time low latency and high-performance purposes. Low latency nodes are useful for applications such as Cloud-native Network Functions (CNF) and Data Plane Development Kit (DPDK). OpenShift Container Platform currently provides mechanisms to tune software on an OpenShift Container Platform cluster for real-time running and low latency (around <20 microseconds reaction time). This includes tuning the kernel and OpenShift Container Platform settings, installing a real-time kernel, and reconfiguring the machine. However, this method requires setting up four different Operators and performing many configurations that, when done manually, are complex and prone to mistakes. OpenShift Container Platform uses the Node Tuning Operator to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator uses a performance profile configuration, which makes it easier to apply these changes in a reliable way. The administrator can specify whether to update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolate CPUs for application containers to run the workloads. Important In OpenShift Container Platform 4.14, if you apply a performance profile to your cluster, all nodes in the cluster will reboot. This reboot includes control plane nodes and worker nodes that were not targeted by the performance profile. This is a known issue in OpenShift Container Platform 4.14 because this release uses Linux control group version 2 (cgroup v2) in alignment with RHEL 9. The low latency tuning features associated with the performance profile do not support cgroup v2; therefore, the nodes reboot to switch back to the cgroup v1 configuration. To revert all nodes in the cluster to the cgroup v2 configuration, you must edit the Node resource. ( OCPBUGS-16976 ) Note In Telco, clusters using PerformanceProfile for low latency, real-time, and Data Plane Development Kit (DPDK) workloads automatically revert to cgroup v1 due to the lack of cgroup v2 support.
Enabling cgroup v2 is not supported if you are using PerformanceProfile . OpenShift Container Platform also supports workload hints for the Node Tuning Operator that can tune the PerformanceProfile to meet the demands of different industry environments. Workload hints are available for highPowerConsumption (very low latency at the cost of increased power consumption) and realTime (priority given to optimum latency). A combination of true/false settings for these hints can be used to deal with application-specific workload profiles and requirements. Workload hints simplify the fine-tuning of performance to industry sector settings. Instead of a "one size fits all" approach, workload hints can cater to usage patterns such as placing priority on: Low latency Real-time capability Efficient use of power Ideally, all of the previously listed items are prioritized. Some of these items come at the expense of others however. The Node Tuning Operator is now aware of the workload expectations and better able to meet the demands of the workload. The cluster admin can now specify into which use case that workload falls. The Node Tuning Operator uses the PerformanceProfile to fine tune the performance settings for the workload. The environment in which an application is operating influences its behavior. For a typical data center with no strict latency requirements, only minimal default tuning is needed that enables CPU partitioning for some high performance workload pods. For data centers and workloads where latency is a higher priority, measures are still taken to optimize power consumption. The most complicated cases are clusters close to latency-sensitive equipment such as manufacturing machinery and software-defined radios. This last class of deployment is often referred to as Far edge. For Far edge deployments, ultra-low latency is the ultimate priority, and is achieved at the expense of power management. 14.2. About Hyper-Threading for low latency and real-time applications Hyper-Threading is an Intel processor technology that allows a physical CPU processor core to function as two logical cores, executing two independent threads simultaneously. Hyper-Threading allows for better system throughput for certain workload types where parallel processing is beneficial. The default OpenShift Container Platform configuration expects Hyper-Threading to be enabled. For telecommunications applications, it is important to design your application infrastructure to minimize latency as much as possible. Hyper-Threading can slow performance times and negatively affect throughput for compute-intensive workloads that require low latency. Disabling Hyper-Threading ensures predictable performance and can decrease processing times for these workloads. Note Hyper-Threading implementation and configuration differs depending on the hardware you are running OpenShift Container Platform on. Consult the relevant host hardware tuning information for more details of the Hyper-Threading implementation specific to that hardware. Disabling Hyper-Threading can increase the cost per core of the cluster. Additional resources Configuring Hyper-Threading for a cluster
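To tie together the performance profile concepts described in this chapter, the following is a minimal PerformanceProfile sketch that applies the workload hints discussed above, reserves and isolates CPUs, and enables the real-time kernel. The profile name, CPU ranges, and node selector label are assumptions for illustration, and the fields you actually need depend on your hardware and latency requirements; treat this as a starting point rather than a recommended configuration. apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-low-latency spec: cpu: reserved: "0-1" # housekeeping CPUs for the cluster and operating system isolated: "2-31" # CPUs dedicated to application containers realTimeKernel: enabled: true # switch the targeted nodes to kernel-rt workloadHints: realTime: true # prioritize optimum latency highPowerConsumption: false nodeSelector: node-role.kubernetes.io/worker-cnf: "" Setting realTime to true with highPowerConsumption set to false matches the typical data-center case described above, where latency is a priority but power consumption is still optimized where possible.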
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/scalability_and_performance/cnf-understanding-low-latency
Chapter 18. Changing a hostname
Chapter 18. Changing a hostname The hostname of a system is the name assigned to the system itself. You can set the name when you install RHEL, and you can change it afterwards. 18.1. Changing a hostname by using nmcli You can use the nmcli utility to update the system hostname. Note that other utilities might use a different term, such as static or persistent hostname. Procedure Optional: Display the current hostname setting: Set the new hostname: NetworkManager automatically restarts the systemd-hostnamed service to activate the new name. For the changes to take effect, reboot the host: Alternatively, if you know which services use the hostname: Restart all services that only read the hostname when the service starts: Active shell users must log in again for the changes to take effect. Verification Display the hostname: 18.2. Changing a hostname by using hostnamectl You can use the hostnamectl utility to update the hostname. By default, this utility sets the following hostname types: Static hostname: Stored in the /etc/hostname file. Typically, services use this name as the hostname. Pretty hostname: A descriptive name, such as Proxy server in data center A . Transient hostname: A fall-back value that is typically received from the network configuration. Procedure Optional: Display the current hostname setting: Set the new hostname: This command sets the static, pretty, and transient hostname to the new value. To set only a specific type, pass the --static , --pretty , or --transient option to the command. The hostnamectl utility automatically restarts the systemd-hostnamed service to activate the new name. For the changes to take effect, reboot the host: Alternatively, if you know which services use the hostname: Restart all services that only read the hostname when the service starts: Active shell users must log in again for the changes to take effect. Verification Display the hostname:
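As a further example of the hostname types described in this chapter, you can set only the pretty hostname and then review all three values together. The descriptive name shown is illustrative. # hostnamectl set-hostname --pretty "Proxy server in data center A" # hostnamectl status The hostnamectl status output shows the static, pretty, and transient hostnames side by side, which is a convenient way to confirm that only the type you intended was changed.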
[ "nmcli general hostname old-hostname.example.com", "nmcli general hostname new-hostname.example.com", "reboot", "systemctl restart <service_name>", "nmcli general hostname new-hostname.example.com", "hostnamectl status --static old-hostname.example.com", "hostnamectl set-hostname new-hostname.example.com", "reboot", "systemctl restart <service_name>", "hostnamectl status --static new-hostname.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/assembly_changing-a-hostname_configuring-and-managing-networking
Chapter 11. Updating an RPM installation
Chapter 11. Updating an RPM installation Before updating your current JBoss EAP instance by using the RPM installation method, check that your system meets certain setup prerequisites. Prerequisites You installed JBoss EAP server using RPM installation method. The base operating system is up to date, and you get updates from the standard Red Hat Enterprise Linux repositories. You are subscribed to the relevant JBoss EAP repository for the update. If you are subscribed to a minor JBoss EAP repository, you have changed to the latest minor repository to get the update. Important For a managed domain, update the JBoss EAP domain controller before you update to a newer release of JBoss EAP. An updated JBoss EAP 8.0 domain controller can still manage other JBoss EAP 8.0 hosts in a managed domain, as long as the domain controller is running the same or more recent version than the rest of the domain. Procedure Update your current JBoss EAP version to the newer JBoss EAP version by issuing the following command in your terminal: Enable new features in the updated release, such as new subsystems, by manually merging each .rpmnew file into your existing configuration files. The RPM update process does not replace any of your modified JBoss EAP configuration files, but it creates .rpmnew files based on the default configuration of your updated JBoss EAP instance. Additional resources For more information, see Installing JBoss EAP 8.0 using the RPM Installation installation method .
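A quick way to locate the .rpmnew files mentioned above is to search the JBoss EAP configuration directory and compare each one with its active counterpart. The directory and file names below are assumptions; adjust them to match where your RPM installation stores its configuration. # find /etc/opt/rh/eap8/ -name "*.rpmnew" # diff /etc/opt/rh/eap8/wildfly/standalone.conf /etc/opt/rh/eap8/wildfly/standalone.conf.rpmnew Merge any new settings you want from each .rpmnew file into your existing file, then remove or archive the .rpmnew copy so that later updates are easier to review.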
[ "dnf update" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/updating_red_hat_jboss_enterprise_application_platform/updating-an-rpm-installation_default
Appendix B. Using Red Hat Enterprise Linux packages
Appendix B. Using Red Hat Enterprise Linux packages This section describes how to use software delivered as RPM packages for Red Hat Enterprise Linux. To ensure the RPM packages for this product are available, you must first register your system . B.1. Overview A component such as a library or server often has multiple packages associated with it. You do not have to install them all. You can install only the ones you need. The primary package typically has the simplest name, without additional qualifiers. This package provides all the required interfaces for using the component at program run time. Packages with names ending in -devel contain headers for C and C++ libraries. These are required at compile time to build programs that depend on this package. Packages with names ending in -docs contain documentation and example programs for the component. For more information about using RPM packages, see one of the following resources: Red Hat Enterprise Linux 7 - Installing and managing software Red Hat Enterprise Linux 8 - Managing software packages B.2. Searching for packages To search for packages, use the yum search command. The search results include package names, which you can use as the value for <package> in the other commands listed in this section. USD yum search <keyword>... B.3. Installing packages To install packages, use the yum install command. USD sudo yum install <package>... B.4. Querying package information To list the packages installed in your system, use the rpm -qa command. USD rpm -qa To get information about a particular package, use the rpm -qi command. USD rpm -qi <package> To list all the files associated with a package, use the rpm -ql command. USD rpm -ql <package>
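For example, to build against a library described above, you would typically install its -devel package and then confirm where the headers were placed. The package name is a placeholder; substitute the component you are working with. USD sudo yum install <package>-devel USD rpm -ql <package>-devel | grep '\.h$' The second command lists only the header files provided by the package, which is a quick check that the development files your build needs are present.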
[ "yum search <keyword>", "sudo yum install <package>", "rpm -qa", "rpm -qi <package>", "rpm -ql <package>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_python_client/using_red_hat_enterprise_linux_packages
6.2. Confining New Linux Users: useradd
6.2. Confining New Linux Users: useradd Linux users mapped to the SELinux unconfined_u user run in the unconfined_t domain. This is seen by running the id -Z command while logged-in as a Linux user mapped to unconfined_u : When Linux users run in the unconfined_t domain, SELinux policy rules are applied, but policy rules exist that allow Linux users running in the unconfined_t domain almost all access. If unconfined Linux users execute an application that SELinux policy defines can transition from the unconfined_t domain to its own confined domain, unconfined Linux users are still subject to the restrictions of that confined domain. The security benefit of this is that, even though a Linux user is running unconfined, the application remains confined, and therefore, the exploitation of a flaw in the application can be limited by policy. Note This does not protect the system from the user. Instead, the user and the system are being protected from possible damage caused by a flaw in the application. When creating Linux users with the useradd command, use the -Z option to specify which SELinux user they are mapped to. The following example creates a new Linux user, useruuser , and maps that user to the SELinux user_u user. Linux users mapped to the SELinux user_u user run in the user_t domain. In this domain, Linux users are unable to run setuid applications unless SELinux policy permits it (such as passwd ), and cannot run the su or sudo command, preventing them from becoming the root user with these commands. Procedure 6.1. Confining a New Linux User to user_u SELinux User As root, create a new Linux user ( useruuser ) that is mapped to the SELinux user_u user. To view the mapping between useruuser and user_u , enter the following command as root: As root, assign a password to the Linux useruuser user: Log out of your current session, and log in as the Linux useruuser user. When you log in, the pam_selinux module maps the Linux user to an SELinux user (in this case, user_u ), and sets up the resulting SELinux context. The Linux user's shell is then launched with this context. Enter the following command to view the context of a Linux user: Log out of the Linux useruuser 's session, and log back in with your account. If you do not want the Linux useruuser user, enter the following command as root to remove it, along with its home directory:
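Separately, before mapping new accounts with the -Z option, it can help to review which SELinux users are available on the system and which roles they are authorized for. The following is a sketch of that check; the exact list depends on the policy installed on your system. ~]# semanage user -l The output shows each SELinux user together with its MLS/MCS range and SELinux roles, so you can confirm, for example, that user_u is limited to the user_r role before you map Linux accounts to it.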
[ "~]USD id -Z unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023", "~]# useradd -Z user_u useruuser", "~]# semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 * system_u system_u s0-s0:c0.c1023 * useruuser user_u s0 *", "~]# passwd useruuser Changing password for user useruuser. New password: Enter a password Retype new password: Enter the same password again passwd: all authentication tokens updated successfully.", "~]USD id -Z user_u:user_r:user_t:s0", "~]# userdel -Z -r useruuser" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-confining_users-confining_new_linux_users_useradd
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 6-1 Wed Aug 7 2019 Steven Levine Preparing document for 7.7 GA publication. Revision 5-2 Thu Oct 4 2018 Steven Levine Preparing document for 7.6 GA publication. Revision 4-2 Wed Mar 14 2018 Steven Levine Preparing document for 7.5 GA publication. Revision 4-1 Thu Dec 14 2017 Steven Levine Preparing document for 7.5 Beta publication. Revision 3-4 Wed Aug 16 2017 Steven Levine Updated version for 7.4. Revision 3-3 Wed Jul 19 2017 Steven Levine Document version for 7.4 GA publication. Revision 3-1 Wed May 10 2017 Steven Levine Preparing document for 7.4 Beta publication. Revision 2-6 Mon Apr 17 2017 Steven Levine Update for 7.3 Revision 2-4 Mon Oct 17 2016 Steven Levine Version for 7.3 GA publication. Revision 2-3 Fri Aug 12 2016 Steven Levine Preparing document for 7.3 Beta publication. Revision 1.2-3 Mon Nov 9 2015 Steven Levine Preparing document for 7.2 GA publication. Revision 1.2-2 Tue Aug 18 2015 Steven Levine Preparing document for 7.2 Beta publication. Revision 1.1-19 Mon Feb 16 2015 Steven Levine Version for 7.1 GA release Revision 1.1-10 Thu Dec 11 2014 Steven Levine Version for 7.1 Beta release Revision 0.1-33 Mon Jun 2 2014 Steven Levine Version for 7.0 GA release
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/appe-publican-revision_history
Chapter 4. Migrating isolated nodes to execution nodes
Chapter 4. Migrating isolated nodes to execution nodes Upgrading from version 1.x to the latest version of the Red Hat Ansible Automation Platform requires platform administrators to migrate data from isolated legacy nodes to execution nodes. This migration is necessary to deploy the automation mesh. This guide explains how to perform a side-by-side migration. This ensures that the data on your original automation environment remains untouched during the migration process. The migration process involves the following steps: Verify upgrade configurations. Backup original instance. Deploy new instance for a side-by-side upgrade. Recreate instance groups in the new instance using ansible controller. Restore original backup to new instance. Set up execution nodes and upgrade instance to Red Hat Ansible Automation Platform 2.3. Configure upgraded controller instance. 4.1. Prerequisites for upgrading Ansible Automation Platform Before you begin to upgrade Ansible Automation Platform, ensure your environment meets the following node and configuration requirements. 4.1.1. Node requirements The following specifications are required for the nodes involved in the Ansible Automation Platform upgrade process: 16 GB of RAM for controller nodes, database node, execution nodes and hop nodes. 4 CPUs for controller nodes, database nodes, execution nodes, and hop nodes. 150 GB+ disk space for database node. 40 GB+ disk space for non-database nodes. DHCP reservations use infinite leases to deploy the cluster with static IP addresses. DNS records for all nodes. Red Hat Enterprise Linux 8 or later 64-bit (x86) installed for all nodes. Chrony configured for all nodes. Python 3.9 or later for all content dependencies. 4.1.2. Automation controller configuration requirements The following automation controller configurations are required before you proceed with the Ansible Automation Platform upgrade process: Configuring NTP server using Chrony Each Ansible Automation Platform node in the cluster must have access to an NTP server. Use the chronyd to synchronize the system clock with NTP servers. This ensures that cluster nodes using SSL certificates that require validation do not fail if the date and time between nodes are not in sync. This is required for all nodes used in the upgraded Ansible Automation Platform cluster: Install chrony : # dnf install chrony --assumeyes Open /etc/chrony.conf using a text editor. Locate the public server pool section and modify it to include the appropriate NTP server addresses. Only one server is required, but three are recommended. Add the 'iburst' option to speed up the time it takes to properly sync with the servers: # Use public servers from the pool.ntp.org project. # Please consider joining the pool (http://www.pool.ntp.org/join.html). server <ntp-server-address> iburst Save changes within the /etc/chrony.conf file. Start the host and enable the chronyd daemon: # systemctl --now enable chronyd.service Verify the chronyd daemon status: # systemctl status chronyd.service Attaching Red Hat subscription on all nodes Red Hat Ansible Automation Platform requires you to have valid subscriptions attached to all nodes. You can verify that your current node has a Red Hat subscription by running the following command: # subscription-manager list --consumed If there is no Red Hat subscription attached to the node, see Attaching your Ansible Automation Platform subscription for more information. 
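If a node is missing a subscription, you can attach one before continuing. The following is a typical sequence; the pool ID is a placeholder that you replace with a pool from your own account. # subscription-manager register # subscription-manager list --available --all # subscription-manager attach --pool=<pool_id> After attaching, rerun subscription-manager list --consumed to confirm that the Ansible Automation Platform subscription is now active on the node.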
Creating a non-root user with sudo privileges Before you upgrade Ansible Automation Platform, it is recommended to create a non-root user with sudo privileges for the deployment process. This user is used for: SSH connectivity. Passwordless authentication during installation. Privilege escalation (sudo) permissions. The following example uses ansible to name this user. On all nodes used in the upgraded Ansible Automation Platform cluster, create a non-root user named ansible and generate an ssh key: Create a non-root user: # useradd ansible Set a password for your user: # passwd ansible 1 Changing password for ansible. Old Password: New Password: Retype New Password: 1 Replace ansible with the non-root user from step 1, if using a different name. Generate an ssh key as the user: USD ssh-keygen -t rsa Disable password requirements when using sudo : # echo "ansible ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/ansible Copying SSH keys to all nodes With the ansible user created, copy the ssh key to all the nodes used in the upgraded Ansible Automation Platform cluster. This ensures that when the Ansible Automation Platform installation runs, it can ssh to all the nodes without a password: USD ssh-copy-id ansible@<node_hostname> Note If running within a cloud provider, you might need to instead create an ~/.ssh/authorized_keys file containing the public key for the ansible user on all your nodes and set the permissions on the authorized_keys file so that only the owner ( ansible ) has read and write access (permissions 600). Configuring firewall settings Configure the firewall settings on all the nodes used in the upgraded Ansible Automation Platform cluster to permit access to the appropriate services and ports for a successful Ansible Automation Platform upgrade. For Red Hat Enterprise Linux 8 or later, enable the firewalld daemon to allow the access needed for all nodes: Install the firewalld package: # dnf install firewalld --assumeyes Start the firewalld service: # systemctl start firewalld Enable the firewalld service: # systemctl enable --now firewalld 4.1.3. Ansible Automation Platform configuration requirements The following Ansible Automation Platform configurations are required before you proceed with the Ansible Automation Platform upgrade process: Configuring firewall settings for execution and hop nodes After upgrading your Red Hat Ansible Automation Platform instance, add the automation mesh port on the mesh nodes (execution and hop nodes) to enable automation mesh functionality. The default port used for the mesh networks on all nodes is 27199/tcp . You can configure the mesh network to use a different port by specifying receptor_listener_port as the variable for each node within your inventory file. On your hop and execution nodes, set the firewalld port to be used for installation. Ensure that firewalld is running: USD sudo systemctl status firewalld Add the automation mesh port on the node (for example, port 27199): USD sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp Reload firewalld : USD sudo firewall-cmd --reload Confirm that the port is open: USD sudo firewall-cmd --list-ports 4.2. Back up your Ansible Automation Platform instance Back up an existing Ansible Automation Platform instance by running the ./setup.sh script with the backup_dir flag, which saves the content and configuration of your current environment: Navigate to your ansible-tower-setup-latest directory.
Run the ./setup.sh script following the example below: USD ./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_compression=True' @credentials.yml -b 1 2 1 backup_dir specifies a directory to save your backup to. 2 @credentials.yml passes the password variables and their values encrypted via ansible-vault . With a successful backup, a backup file is created at /ansible/mybackup/tower-backup-latest.tar.gz . This backup will be necessary later to migrate content from your old instance to the new one. 4.3. Deploy a new instance for a side-by-side upgrade To proceed with the side-by-side upgrade process, deploy a second instance of Ansible Tower 3.8.x with the same instance group configurations. This new instance will receive the content and configuration from your original instance, and will later be upgraded to Red Hat Ansible Automation Platform 2.3. 4.3.1. Deploy a new instance of Ansible Tower To deploy a new Ansible Tower instance, do the following: Download the Tower installer version that matches your original Tower instance by navigating to the Ansible Tower installer page . Navigate to the installer, then open the inventory file using a text editor to configure the inventory file for a Tower installation: In addition to any Tower configurations, remove any fields containing isolated_group or instance_group . Note For more information about installing Tower using the Ansible Automation Platform installer, see the Ansible Automation Platform Installation Guide for your specific installation scenario. Run the setup.sh script to begin the installation. Once the new instance is installed, configure the Tower settings to match the instance groups from your original Tower instance. 4.3.2. Recreate instance groups in the new instance To recreate your instance groups in the new instance, do the following: Note Make note of all instance groups from your original Tower instance. You will need to recreate these groups in your new instance. Log in to your new instance of Tower. In the navigation pane, select Administration Instance groups . Click Create instance group . Enter a Name that matches an instance group from your original instance, then click Save Repeat until all instance groups from your original instance have been recreated. 4.4. Restore backup to new instance Running the ./setup.sh script with the restore_backup_file flag migrates content from the backup file of your original 1.x instance to the new instance. This effectively migrates all job histories, templates, and other Ansible Automation Platform related content. Procedure Run the following command: USD ./setup.sh -r -e 'restore_backup_file=/ansible/mybackup/tower-backup-latest.tar.gz' -e 'use_compression=True' -e @credentials.yml -r -- --ask-vault-pass 1 2 3 1 restore_backup_file specifies the location of the Ansible Automation Platform backup database 2 use_compression is set to True due to compression being used during the backup process 3 -r sets the restore database option to True Log in to your new RHEL 8 Tower 3.8 instance to verify whether the content from your original instance has been restored: Navigate to Administration Instance groups . The recreated instance groups should now contain the Total Jobs from your original instance. Using the side navigation panel, check that your content has been imported from your original instance, including Jobs, Templates, Inventories, Credentials, and Users. You now have a new instance of Ansible Tower with all the Ansible content from your original instance. 
You will upgrade this new instance to Ansible Automation Platform 2.3 so that you keep all your data without overwriting your original instance. 4.5. Upgrading to Ansible Automation Platform 2.3 To upgrade your instance of Ansible Tower to Ansible Automation Platform 2.3, copy the inventory file from your original Tower instance to your new Tower instance and run the installer. The Red Hat Ansible Automation Platform installer detects a pre-2.3 and offers an upgraded inventory file to continue with the upgrade process: Download the latest installer for Red Hat Ansible Automation Platform from the Red Hat Customer Portal . Extract the files: USD tar xvzf ansible-automation-platform-setup- <latest_version >.tar.gz Navigate into your Ansible Automation Platform installation directory: USD cd ansible-automation-platform-setup- <latest_version> / Copy the inventory file from your original instance into the directory of the latest installer: USD cp ansible-tower-setup-3.8.x.x/inventory ansible-automation-platform-setup- <latest_version> Run the setup.sh script: USD ./setup.sh The setup script pauses and indicates that a "pre-2.x" inventory file was detected, but offers a new file called inventory.new.ini allowing you to continue to upgrade your original instance. Open inventory.new.ini with a text editor. Note By running the setup script, the Installer modified a few fields from your original inventory file, such as renaming [tower] to [automationcontroller]. Modify the newly generated inventory.new.ini file to configure your automation mesh by assigning relevant variables, nodes, and relevant node-to-node peer connections: Note The design of your automation mesh topology depends on the automation needs of your environment. It is beyond the scope of this document to provide designs for all possible scenarios. The following is one example automation mesh design. Review the full Ansible Automation Platform automation mesh guide for information on designing it for your needs. Example inventory file with a standard control plane consisting of three nodes utilizing hop nodes: 1 Specifies a control node that runs project and inventory updates and system jobs, but not regular jobs. Execution capabilities are disabled on these nodes. 2 Specifies peer relationships for node-to-node connections in the [execution_nodes] group. 3 Specifies hop nodes that route traffic to other execution nodes. Hop nodes cannot execute automation. Import or generate a automation hub API token. Import an existing API token with the automationhub_api_token flag: automationhub_api_token=<api_token> Generate a new API token, and invalidate any existing tokens, by setting the generate_automationhub_token flag to True : generate_automationhub_token=True Once you have finished configuring your inventory.new.ini for automation mesh, run the setup script using inventory.new.ini : USD ./setup.sh -i inventory.new.ini -e @credentials.yml -- --ask-vault-pass Once the installation completes, verify that your Ansible Automation Platform has been installed successfully by logging in to the Ansible Automation Platform dashboard UI across all automation controller nodes. Additional resources For general information on using the Ansible Automation Platform installer, see the Red Hat Ansible Automation Platform installation guide . For more information about automation mesh, see the Ansible Automation Platform automation mesh guide 4.6. Configuring your upgraded Ansible Automation Platform 4.6.1. 
Configuring automation controller instance groups After upgrading your Red Hat Ansible Automation Platform instance, associate your original instances to its corresponding instance groups by configuring settings in the automation controller UI: Log into the new Controller instance. Content from old instance, such as credentials, jobs, inventories should now be visible on your Controller instance. Navigate to Administration Instance Groups . Associate execution nodes by clicking on an instance group, then click the Instances tab. Click Associate . Select the node(s) to associate to this instance group, then click Save . You can also modify the default instance to disassociate your new execution nodes.
[ "dnf install chrony --assumeyes", "Use public servers from the pool.ntp.org project. Please consider joining the pool (http://www.pool.ntp.org/join.html). server <ntp-server-address> iburst", "systemctl --now enable chronyd.service", "systemctl status chronyd.service", "subscription-manager list --consumed", "useradd ansible", "passwd ansible 1 Changing password for ansible. Old Password: New Password: Retype New Password:", "ssh-keygen -t rsa", "echo \"ansible ALL=(ALL) NOPASSWD:ALL\" | sudo tee -a /etc/sudoers.d/ansible", "ssh-copy-id [email protected]", "dnf install firewalld --assumeyes", "systemctl start firewalld", "systemctl enable --now firewalld", "sudo systemctl status firewalld", "sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp", "sudo firewall-cmd --reload", "sudo firewall-cmd --list-ports", "./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_compression=True' @credentials.yml -b 1 2", "./setup.sh -r -e 'restore_backup_file=/ansible/mybackup/tower-backup-latest.tar.gz' -e 'use_compression=True' -e @credentials.yml -r -- --ask-vault-pass 1 2 3", "tar xvzf ansible-automation-platform-setup- <latest_version >.tar.gz", "cd ansible-automation-platform-setup- <latest_version> /", "cp ansible-tower-setup-3.8.x.x/inventory ansible-automation-platform-setup- <latest_version>", "./setup.sh", "[automationcontroller] control-plane-1.example.com control-plane-2.example.com control-plane-3.example.com [automationcontroller:vars] node_type=control 1 peers=execution_nodes 2 [execution_nodes] execution-node-1.example.com peers=execution-node-2.example.com execution-node-2.example.com peers=execution-node-3.example.com execution-node-3.example.com peers=execution-node-4.example.com execution-node-4.example.com peers=execution-node-5.example.com node_type=hop execution-node-5.example.com peers=execution-node-6.example.com node_type=hop 3 execution-node-6.example.com peers=execution-node-7.example.com execution-node-7.example.com [execution_nodes:vars] node_type=execution", "automationhub_api_token=<api_token>", "generate_automationhub_token=True", "./setup.sh -i inventory.new.ini -e @credentials.yml -- --ask-vault-pass" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/migrate-isolated-execution-nodes
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open a the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/using_octavia_for_load_balancing-as-a-service/proc_providing-feedback-on-red-hat-documentation
Chapter 5. Preparing to update to OpenShift Container Platform 4.13
Chapter 5. Preparing to update to OpenShift Container Platform 4.13 Learn more about administrative tasks that cluster admins must perform to successfully initialize an update, as well as optional guidelines for ensuring a successful update. 5.1. RHEL 9.2 micro-architecture requirement change OpenShift Container Platform is now based on the RHEL 9.2 host operating system. The micro-architecture requirements are now increased to x86_64-v2, Power9, and Z14. See the RHEL micro-architecture requirements documentation . You can verify compatibility before updating by following the procedures outlined in this KCS article . Important Without the correct micro-architecture requirements, the update process will fail. Make sure you purchase the appropriate subscription for each architecture. For more information, see Get Started with Red Hat Enterprise Linux - additional architectures 5.2. Kubernetes API deprecations and removals OpenShift Container Platform 4.13 uses Kubernetes 1.26, which removed several deprecated APIs. A cluster administrator must provide a manual acknowledgment before the cluster can be updated from OpenShift Container Platform 4.12 to 4.13. This is to help prevent issues after upgrading to OpenShift Container Platform 4.13, where APIs that have been removed are still in use by workloads, tools, or other components running on or interacting with the cluster. Administrators must evaluate their cluster for any APIs in use that will be removed and migrate the affected components to use the appropriate new API version. After this evaluation and migration is complete, the administrator can provide the acknowledgment. Before you can update your OpenShift Container Platform 4.12 cluster to 4.13, you must provide the administrator acknowledgment. 5.2.1. Removed Kubernetes APIs OpenShift Container Platform 4.13 uses Kubernetes 1.26, which removed the following deprecated APIs. You must migrate manifests and API clients to use the appropriate API version. For more information about migrating removed APIs, see the Kubernetes documentation . Table 5.1. APIs removed from Kubernetes 1.26 Resource Removed API Migrate to FlowSchema flowcontrol.apiserver.k8s.io/v1beta1 flowcontrol.apiserver.k8s.io/v1beta3 HorizontalPodAutoscaler autoscaling/v2beta2 autoscaling/v2 PriorityLevelConfiguration flowcontrol.apiserver.k8s.io/v1beta1 flowcontrol.apiserver.k8s.io/v1beta3 5.2.2. Evaluating your cluster for removed APIs There are several methods to help administrators identify where APIs that will be removed are in use. However, OpenShift Container Platform cannot identify all instances, especially workloads that are idle or external tools that are used. It is the responsibility of the administrator to properly evaluate all workloads and other integrations for instances of removed APIs. 5.2.2.1. Reviewing alerts to identify uses of removed APIs Two alerts fire when an API is in use that will be removed in the release: APIRemovedInNextReleaseInUse - for APIs that will be removed in the OpenShift Container Platform release. APIRemovedInNextEUSReleaseInUse - for APIs that will be removed in the OpenShift Container Platform Extended Update Support (EUS) release. If either of these alerts are firing in your cluster, review the alerts and take action to clear the alerts by migrating manifests and API clients to use the new API version. Use the APIRequestCount API to get more information about which APIs are in use and which workloads are using removed APIs, because the alerts do not provide this information. 
Additionally, some APIs might not trigger these alerts but are still captured by APIRequestCount . The alerts are tuned to be less sensitive to avoid alerting fatigue in production systems. 5.2.2.2. Using APIRequestCount to identify uses of removed APIs You can use the APIRequestCount API to track API requests and review whether any of them are using one of the removed APIs. Prerequisites You must have access to the cluster as a user with the cluster-admin role. Procedure Run the following command and examine the REMOVEDINRELEASE column of the output to identify the removed APIs that are currently in use: USD oc get apirequestcounts Example output NAME REMOVEDINRELEASE REQUESTSINCURRENTHOUR REQUESTSINLAST24H ... flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 1.26 0 16 flowschemas.v1beta2.flowcontrol.apiserver.k8s.io 101 857 groups.v1.user.openshift.io 22 201 hardwaredata.v1alpha1.metal3.io 3 33 helmchartrepositories.v1beta1.helm.openshift.io 142 628 horizontalpodautoscalers.v2.autoscaling 11 103 horizontalpodautoscalers.v2beta2.autoscaling 1.26 0 15 ... Important You can safely ignore the following entries that appear in the results: The system:serviceaccount:kube-system:generic-garbage-collector and the system:serviceaccount:kube-system:namespace-controller users might appear in the results because these services invoke all registered APIs when searching for resources to remove. The system:kube-controller-manager and system:cluster-policy-controller users might appear in the results because they walk through all resources while enforcing various policies. You can also use -o jsonpath to filter the results: USD oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}' Example output 1.26 flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 1.26 horizontalpodautoscalers.v2beta2.autoscaling 5.2.2.3. Using APIRequestCount to identify which workloads are using the removed APIs You can examine the APIRequestCount resource for a given API version to help identify which workloads are using the API. Prerequisites You must have access to the cluster as a user with the cluster-admin role. Procedure Run the following command and examine the username and userAgent fields to help identify the workloads that are using the API: USD oc get apirequestcounts <resource>.<version>.<group> -o yaml For example: USD oc get apirequestcounts flowschemas.v1beta1.flowcontrol.apiserver.k8s.io -o yaml You can also use -o jsonpath to extract the username and userAgent values from an APIRequestCount resource: USD oc get apirequestcounts flowschemas.v1beta1.flowcontrol.apiserver.k8s.io \ -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{","}{.username}{","}{.userAgent}{"\n"}{end}' \ | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT Example output VERBS USERNAME USERAGENT get system:serviceaccount:openshift-cluster-version:default cluster-version-operator/v0.0.0 watch system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa oauth-apiserver/v0.0.0 5.2.3. Migrating instances of removed APIs For information about how to migrate removed Kubernetes APIs, see the Deprecated API Migration Guide in the Kubernetes documentation. 5.2.4. Providing the administrator acknowledgment After you have evaluated your cluster for any removed APIs and have migrated any removed APIs, you can acknowledge that your cluster is ready to upgrade from OpenShift Container Platform 4.12 to 4.13. 
Warning Be aware that all responsibility falls on the administrator to ensure that all uses of removed APIs have been resolved and migrated as necessary before providing this administrator acknowledgment. OpenShift Container Platform can assist with the evaluation, but cannot identify all possible uses of removed APIs, especially idle workloads or external tools. Prerequisites You must have access to the cluster as a user with the cluster-admin role. Procedure Run the following command to acknowledge that you have completed the evaluation and your cluster is ready for the Kubernetes API removals in OpenShift Container Platform 4.13: USD oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.12-kube-1.26-api-removals-in-4.13":"true"}}' --type=merge 5.3. Assessing the risk of conditional updates A conditional update is an update target that is available but not recommended due to a known risk that applies to your cluster. The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update recommendations, and some potential update targets might have risks associated with them. The CVO evaluates the conditional risks, and if the risks are not applicable to the cluster, then the target version is available as a recommended update path for the cluster. If the risk is determined to be applicable, or if for some reason CVO cannot evaluate the risk, then the update target is available to the cluster as a conditional update. When you encounter a conditional update while you are trying to update to a target version, you must assess the risk of updating your cluster to that version. Generally, if you do not have a specific need to update to that target version, it is best to wait for a recommended update path from Red Hat. However, if you have a strong reason to update to that version, for example, if you need to fix an important CVE, then the benefit of fixing the CVE might outweigh the risk of the update being problematic for your cluster. You can complete the following tasks to determine whether you agree with the Red Hat assessment of the update risk: Complete extensive testing in a non-production environment to the extent that you are comfortable completing the update in your production environment. Follow the links provided in the conditional update description, investigate the bug, and determine if it is likely to cause issues for your cluster. If you need help understanding the risk, contact Red Hat Support. Additional resources Evaluation of update availability 5.4. Best practices for cluster updates OpenShift Container Platform provides a robust update experience that minimizes workload disruptions during an update. Updates will not begin unless the cluster is in an upgradeable state at the time of the update request. This design enforces some key conditions before initiating an update, but there are a number of actions you can take to increase your chances of a successful cluster update. 5.4.1. Choose versions recommended by the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations based on cluster characteristics such as the cluster's subscribed channel. The Cluster Version Operator saves these recommendations as either recommended or conditional updates. While it is possible to attempt an update to a version that is not recommended by OSUS, following a recommended update path protects users from encountering known issues or unintended consequences on the cluster. 
Choose only update targets that are recommended by OSUS to ensure a successful update. 5.4.2. Address all critical alerts on the cluster Critical alerts must always be addressed as soon as possible, but it is especially important to address these alerts and resolve any problems before initiating a cluster update. Failing to address critical alerts before beginning an update can cause problematic conditions for the cluster. In the Administrator perspective of the web console, navigate to Observe Alerting to find critical alerts. 5.4.3. Ensure that the cluster is in an Upgradeable state When one or more Operators have not reported their Upgradeable condition as True for more than an hour, the ClusterNotUpgradeable warning alert is triggered in the cluster. In most cases this alert does not block patch updates, but you cannot perform a minor version update until you resolve this alert and all Operators report Upgradeable as True . For more information about the Upgradeable condition, see "Understanding cluster Operator condition types" in the additional resources section. 5.4.4. Ensure that enough spare nodes are available A cluster should not be running with little to no spare node capacity, especially when initiating a cluster update. Nodes that are not running and available may limit a cluster's ability to perform an update with minimal disruption to cluster workloads. Depending on the configured value of the cluster's maxUnavailable spec, the cluster might not be able to apply machine configuration changes to nodes if there is an unavailable node. Additionally, if compute nodes do not have enough spare capacity, workloads might not be able to temporarily shift to another node while the first node is taken offline for an update. Make sure that you have enough available nodes in each worker pool, as well as enough spare capacity on your compute nodes, to increase the chance of successful node updates. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. 5.4.5. Ensure that the cluster's PodDisruptionBudget is properly configured You can use the PodDisruptionBudget object to define the minimum number or percentage of pod replicas that must be available at any given time. This configuration protects workloads from disruptions during maintenance tasks such as cluster updates. However, it is possible to configure the PodDisruptionBudget for a given topology in a way that prevents nodes from being drained and updated during a cluster update. When planning a cluster update, check the configuration of the PodDisruptionBudget object for the following factors: For highly available workloads, make sure there are replicas that can be temporarily taken offline without being prohibited by the PodDisruptionBudget . For workloads that aren't highly available, make sure they are either not protected by a PodDisruptionBudget or have some alternative mechanism for draining these workloads eventually, such as periodic restart or guaranteed eventual termination. A minimal PodDisruptionBudget sketch is shown after the additional resources below. Additional resources Understanding cluster Operator condition types
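As a reference for the PodDisruptionBudget guidance above, the following minimal sketch keeps at least two replicas of a three-replica workload available, so one replica at a time can be evicted while a node is drained for an update. The namespace, name, and label values are hypothetical placeholders.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-app-pdb        # hypothetical name
  namespace: example-app       # hypothetical namespace
spec:
  minAvailable: 2              # with 3 replicas, one pod at a time can be evicted during a drain
  selector:
    matchLabels:
      app: example-app         # must match the labels on the workload's pods

Before starting an update, you can list the budgets in the cluster, for example with oc get poddisruptionbudget -A , and confirm that each budget that protects a workload reports a nonzero allowed disruptions value.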
[ "oc get apirequestcounts", "NAME REMOVEDINRELEASE REQUESTSINCURRENTHOUR REQUESTSINLAST24H flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 1.26 0 16 flowschemas.v1beta2.flowcontrol.apiserver.k8s.io 101 857 groups.v1.user.openshift.io 22 201 hardwaredata.v1alpha1.metal3.io 3 33 helmchartrepositories.v1beta1.helm.openshift.io 142 628 horizontalpodautoscalers.v2.autoscaling 11 103 horizontalpodautoscalers.v2beta2.autoscaling 1.26 0 15", "oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!=\"\")]}{.status.removedInRelease}{\"\\t\"}{.metadata.name}{\"\\n\"}{end}'", "1.26 flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 1.26 horizontalpodautoscalers.v2beta2.autoscaling", "oc get apirequestcounts <resource>.<version>.<group> -o yaml", "oc get apirequestcounts flowschemas.v1beta1.flowcontrol.apiserver.k8s.io -o yaml", "oc get apirequestcounts flowschemas.v1beta1.flowcontrol.apiserver.k8s.io -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{\",\"}{.username}{\",\"}{.userAgent}{\"\\n\"}{end}' | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT", "VERBS USERNAME USERAGENT get system:serviceaccount:openshift-cluster-version:default cluster-version-operator/v0.0.0 watch system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa oauth-apiserver/v0.0.0", "oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.12-kube-1.26-api-removals-in-4.13\":\"true\"}}' --type=merge" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/updating_clusters/updating-cluster-prepare
Chapter 3. LVM Administration Overview
Chapter 3. LVM Administration Overview This chapter provides an overview of the administrative procedures you use to configure LVM logical volumes. This chapter is intended to provide a general understanding of the steps involved. For specific step-by-step examples of common LVM configuration procedures, see Chapter 5, LVM Configuration Examples . For descriptions of the CLI commands you can use to perform LVM administration, see Chapter 4, LVM Administration with CLI Commands . 3.1. Logical Volume Creation Overview The following is a summary of the steps to perform to create an LVM logical volume. Initialize the partitions you will use for the LVM volume as physical volumes (this labels them). Create a volume group. Create a logical volume. After creating the logical volume you can create and mount the file system. The examples in this document use GFS2 file systems. Create a GFS2 file system on the logical volume with the mkfs.gfs2 command. Create a new mount point with the mkdir command. In a clustered system, create the mount point on all nodes in the cluster. Mount the file system. You may want to add a line to the fstab file for each node in the system. An example command sequence is shown at the end of this section. Note Although a GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, for the Red Hat Enterprise Linux 7 release Red Hat does not support the use of GFS2 as a single-node file system. Red Hat will continue to support single-node GFS2 file systems for mounting snapshots of cluster file systems (for example, for backup purposes). Creating the LVM volume is machine independent, since the storage area for LVM setup information is on the physical volumes and not the machine where the volume was created. Servers that use the storage have local copies, but can re-create that information from what is on the physical volumes. You can attach physical volumes to a different server if the LVM versions are compatible.
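For example, a minimal command sequence for the steps above might look like the following. The device names, volume group and logical volume names, size, cluster name, journal count, and mount point are hypothetical placeholders; the GFS2 lock table name and journal count must match your actual cluster configuration.

# pvcreate /dev/vdb1 /dev/vdc1
# vgcreate examplevg /dev/vdb1 /dev/vdc1
# lvcreate -L 50G -n examplelv examplevg
# mkfs.gfs2 -p lock_dlm -t examplecluster:examplefs -j 2 /dev/examplevg/examplelv
# mkdir /mnt/examplefs
# mount /dev/examplevg/examplelv /mnt/examplefs

To mount the file system automatically at boot, a corresponding fstab entry on each node might look like /dev/examplevg/examplelv /mnt/examplefs gfs2 defaults 0 0 .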
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lvm_administration
Chapter 4. User-managed encryption for IBM Cloud
Chapter 4. User-managed encryption for IBM Cloud By default, provider-managed encryption is used to secure the following when you deploy an OpenShift Container Platform cluster: The root (boot) volume of control plane and compute machines Persistent volumes (data volumes) that are provisioned after the cluster is deployed You can override the default behavior by specifying an IBM(R) Key Protect for IBM Cloud(R) (Key Protect) root key as part of the installation process. When you bring your own root key, you modify the installation configuration file ( install-config.yaml ) to specify the Cloud Resource Name (CRN) of the root key by using the encryptionKey parameter; an illustrative snippet appears at the end of this chapter. You can specify that: The same root key be used for all cluster machines. You do so by specifying the key as part of the cluster's default machine configuration. When specified as part of the default machine configuration, all managed storage classes are updated with this key. As such, data volumes that are provisioned after the installation are also encrypted using this key. Separate root keys be used for the control plane and compute machine pools. For more information about the encryptionKey parameter, see Additional IBM Cloud configuration parameters . Note Make sure you have integrated Key Protect with your IBM Cloud Block Storage service. For more information, see the Key Protect documentation . 4.1. Next steps Install an OpenShift Container Platform cluster: Installing a cluster on IBM Cloud with customizations Installing a cluster on IBM Cloud with network customizations Installing a cluster on IBM Cloud into an existing VPC Installing a private cluster on IBM Cloud
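For illustration only, the following install-config.yaml fragment sketches how a Key Protect root key CRN might be supplied for all cluster machines through the default machine configuration. The region and CRN values are placeholders, and the exact nesting of the encryptionKey field shown here is an assumption; confirm the authoritative field path in Additional IBM Cloud configuration parameters before using it.

platform:
  ibmcloud:
    region: us-south                      # placeholder region
    defaultMachinePlatform:
      bootVolume:
        encryptionKey: crn:v1:bluemix:public:kms:us-south:a/<account_id>:<instance_id>:key:<key_id>   # placeholder CRN; field nesting is an assumption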
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_cloud/user-managed-encryption-ibm-cloud
probe::netdev.hard_transmit
probe::netdev.hard_transmit Name probe::netdev.hard_transmit - Called when the device is going to TX (hard) Synopsis netdev.hard_transmit Values truesize The size of the data to be transmitted. dev_name The device scheduled to transmit. protocol The protocol used in the transmission. length The length of the transmit buffer.
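For example, the probe variables listed above can be printed with a simple SystemTap one-liner; the output format is arbitrary and the script is only a sketch.

# stap -e 'probe netdev.hard_transmit { printf("%s: len=%d proto=0x%04x truesize=%d\n", dev_name, length, protocol, truesize) }'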
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netdev-hard-transmit
Chapter 6. Configuring guest access with RBAC UI
Chapter 6. Configuring guest access with RBAC UI Use guest access with the role-based access control (RBAC) front-end plugin to allow a user to test role and policy creation without the need to set up and configure an authentication provider. Note Guest access is not recommended for production. 6.1. Configuring the RBAC backend plugin You can configure the RBAC backend plugin by updating the app-config.yaml file to enable the permission framework. Prerequisites You have installed the @janus-idp/backstage-plugin-rbac plugin in Developer Hub. For more information, see Configuring dynamic plugins . Procedure Update the app-config.yaml file to enable the permission framework as shown: permission: enabled: true rbac: admin: users: - name: user:default/guest pluginsWithPermission: - catalog - permission - scaffolder Note The pluginsWithPermission section of the app-config.yaml file includes only three plugins by default. Update the section as needed to include any additional plugins that also incorporate permissions. 6.2. Setting up the guest authentication provider You can enable guest authentication and use it alongside the RBAC frontend plugin. Prerequisites You have installed the @janus-idp/backstage-plugin-rbac plugin in Developer Hub. For more information, see Configuring dynamic plugins . Procedure In the app-config.yaml file, add the user entity reference to resolve and enable the dangerouslyAllowOutsideDevelopment option, as shown in the following example: auth: environment: development providers: guest: userEntityRef: user:default/guest dangerouslyAllowOutsideDevelopment: true Note You can use user:default/guest as the user entity reference to match the added user under the permission.rbac.admin.users section of the app-config.yaml file.
[ "permission enabled: true rbac: admin: users: - name: user:default/guest pluginsWithPermission: - catalog - permission - scaffolder", "auth: environment: development providers: guest: userEntityRef: user:default/guest dangerouslyAllowOutsideDevelopment: true" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/authorization/configuring-guest-access-with-rbac-ui_title-authorization
Chapter 4. New and changed features
Chapter 4. New and changed features AMQ Interconnect 1.10 includes the following changes: Red Hat Enterprise Linux 6 is no longer supported with this release.
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/release_notes_for_amq_interconnect_1.10/new_and_changed_features
Chapter 14. Security
Chapter 14. Security LUKS-encrypted removable storage devices can now be automatically unlocked using NBDE With this update, the clevis package and the clevis_udisks2 subpackage enable users to bind removable volumes to a Network-Bound Disk Encryption (NBDE) policy. To automatically unlock a LUKS-encrypted removable storage device, such as a USB drive, use the clevis luks bind and clevis luks unlock commands; a usage sketch appears at the end of this chapter. (BZ#1475408) new package: clevis-systemd This update of the Clevis pluggable framework introduces the clevis-systemd subpackage, which enables administrators to set automated unlocking of LUKS-encrypted non-root volumes at boot time. (BZ#1475406) OpenSCAP can now be integrated into Ansible workflows With this update, the OpenSCAP scanner can generate remediation scripts in the form of Ansible Playbooks, either based on profiles or based on scan results. Playbooks based on SCAP Security Guide Profiles contain fixes for all rules, and playbooks based on scan results contain only fixes for rules that fail during an evaluation. The user can also generate a playbook from a tailored Profile, or customize it directly by editing the values in the playbook. Tags, such as Rule ID, strategy, complexity, disruption, or references, used as metadata for tasks in playbooks serve to filter which tasks to apply. (BZ# 1404429 ) SECCOMP_FILTER_FLAG_TSYNC enables synchronization of calling process threads This update introduces the SECCOMP_FILTER_FLAG_TSYNC flag. When adding a new filter, this flag synchronizes all other threads of the calling process to the same seccomp filter tree. See the seccomp(2) man page for more information. Note that if an application installs multiple libseccomp or seccomp-bpf filters, the seccomp() syscall should be added to the list of allowed system calls. (BZ#1458278) nss rebased to version 3.34 The nss packages have been upgraded to upstream version 3.34, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: TLS compression is no longer supported. The TLS server code now supports session ticket without an RSA key. Certificates can be specified using a PKCS#11 URI. The RSA-PSS cryptographic signature scheme is now allowed for signing and verification of certificate signatures. (BZ#1457789) SSLv3 disabled in mod_ssl To improve the security of SSL/TLS connections, the default configuration of the httpd mod_ssl module has been changed to disable support for the SSLv3 protocol, and to restrict the use of certain cryptographic cipher suites. This change will affect only fresh installations of the mod_ssl package, so existing users should manually change the SSL configuration as required. Any SSL clients attempting to establish connections using SSLv3 , or using a cipher suite based on DES or RC4 , will be denied in the new default configuration. To allow such insecure connections, modify the SSLProtocol and SSLCipherSuite directives in the /etc/httpd/conf.d/ssl.conf file. (BZ# 1274890 ) Libreswan now supports split-DNS configuration for IKEv2 This update of the libreswan packages introduces support for split-DNS configuration for the Internet Key Exchange version 2 (IKEv2) protocol through the leftmodecfgdns= and leftcfgdomains= options. This enables the user to reconfigure a locally running DNS server with DNS forwarding for specific private domains.
(BZ# 1300763 ) libreswan now supports AES-GMAC for ESP With this update, support for Advanced Encryption Standard (AES) Galois Message Authentication Code (GMAC) within IPsec Encapsulating Security Payload (ESP) through the phase2alg=null_auth_aes_gmac option has been added to the libreswan packages. (BZ#1475434) openssl-ibmca rebased to 1.4.0 The openssl-ibmca packages have been upgraded to upstream version 1.4.0, which provides a number of bug fixes and enhancements over the version: Added Advanced Encryption Standard Galois/Counter Mode (AES-GCM) support. Fixes for OpenSSL operating in FIPS mode incorporated. (BZ#1456516) opencryptoki rebased to 3.7.0 The opencryptoki packages have been upgraded to upstream version 3.7.0, which provides a number of bug fixes and enhancements over the version: Upgraded the license to Common Public License Version 1.0 (CPL). Added ECDSA with SHA-2 support for Enterprise PKCS #11 (EP11) and Common Cryptographic Architecture (CCA). Improved performance by moving from mutex locks to Transactional Memory (TM). (BZ#1456520) atomic scan with configuration_compliance enables creating security-compliant container images at build time The rhel7/openscap container image now provides the configuration_compliance scan type. When used as an argument for the atomic scan command, this new scan type enables users to: scan Red Hat Enterprise Linux-based container images and containers against any profile provided by the SCAP Security Guide (SSG) remediate Red Hat Enterprise Linux-based container images to be compliant with any profile provided by the SSG generate an HTML report from a scan or a remediation. The remediation results in a container image with an altered configuration that is added as a new layer on top of the original container image. Note that the original container image remains unchanged and only a new layer is created on top of it. The remediation process builds a new container image that contains all the configuration improvements. The content of this layer is defined by the security policy of scanning. This also means that the remediated container image is no longer signed by Red Hat, which is expected, since it differs from the original container image by containing the remediated layer. (BZ# 1472499 ) tang-nagios enables Nagios to monitor Tang The tang-nagios subpackage provides the Nagios plugin for Tang . The plugin enables the Nagios program to monitor a Tang server. The subpackage is available in the Optional channel. See the tang-nagios(1) man page for more information. (BZ# 1478895 ) clevis now logs privileged operations With this update, the clevis-udisks2 subpackage logs all attempted key recoveries to the Audit log, and the privileged operations can be now tracked using the Linux Audit system. (BZ# 1478888 ) PK11_CreateManagedGenericObject() has been added to NSS to prevent memory leaks in applications The PK11_DestroyGenericObject() function does not destroy objects allocated by PK11_CreateGenericObject() properly, but some applications depend on a function for creating objects that persist after the use of the object. For this reason, the Network Security Services (NSS) libraries now include the PK11_CreateManagedGenericObject() function. If you create objects with PK11_CreateManagedGenericObject() , the PK11_DestroyGenericObject() function also properly destroys underlying associated objects. Applications, such as the curl utility, can now use PK11_CreateManagedGenericObject() to prevent memory leaks. 
(BZ# 1395803 ) OpenSSH now supports openssl-ibmca and openssl-ibmpkcs11 HSMs With this update, the OpenSSH suite enables hardware security modules (HSM) handled by the openssl-ibmca and openssl-ibmpkcs11 packages. Prior to this, the OpenSSH seccomp filter prevented these cards working with the OpenSSH privilege separation. The seccomp filter has been updated to allow system calls needed by the cryptographic cards on IBM Z. (BZ#1478035) cgroup_seclabel enables fine-grained access control on cgroups This update introduces the cgroup_seclabel policy capability that enables users to set labels on control group (cgroup) files. Prior to this addition, labeling of the cgroup file system was not possible, and to run the systemd service manager in a container, read and write permissions for any content on the cgroup file system had to be allowed. The cgroup_seclabel policy capability enables fine-grained access control on the cgroup file system. (BZ#1494179) The boot process can now unlock encrypted devices connected by network Previously, the boot process attempted to unlock block devices connected by network before starting network services. Because the network was not activated, it was not possible to connect and decrypt these devices. With this update, the remote-cryptsetup.target unit and other patches have been added to systemd packages. As a result, it is now possible to unlock encrypted block devices that are connected by network during system boot and to mount file systems on such block devices. To ensure correct ordering between services during system boot, you must mark the network device with the _netdev option in the /etc/crypttab configuration file. A common use case for this feature is together with network-bound disk encryption. For more information on network-bound disk encryption, see the following chapter in the Red Hat Enterprise Linux Security Guide: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/sec-policy-based_decryption#sec-Using_Network-Bound_Disk_Encryption (BZ# 1384014 ) SELinux now supports InfiniBand object labeling This release introduces SELinux support for InfiniBand end port and P_Key labeling, including enhancements to the kernel, policy, and the semanage tool. To manage InfiniBand -related labels, use the following commands: semanage ibendport semanage ibpkey (BZ# 1471809 , BZ# 1464484 , BZ#1464478) libica rebased to 3.2.0 The libica packages have been upgraded to upstream version 3.2.0, which most notably adds support for the Enhanced SIMD instructions set. (BZ#1376836) SELinux now supports systemd No New Privileges This update introduces the nnp_nosuid_transition policy capability that enables SELinux domain transitions under No New Privileges (NNP) or nosuid if nnp_nosuid_transition is allowed between the old and new contexts. The selinux-policy packages now contain a policy for systemd services that use the NNP security feature. The following rule describes allowing this capability for a service: For example: The distribution policy now also contains the m4 macro interface, which can be used in SELinux security policies for services that use the init_nnp_daemon_domain() function. (BZ# 1480518 ) Libreswan rebased to version 3.23 The libreswan packages have been upgraded to upstream version 3.23, which provides a number of bug fixes, speed improvements, and enhancements over the version. 
Notable changes include: Support for the extended DNS Security Extensions (DNSSEC) suite through the dnssec-enable=yes|no , dnssec-rootkey-file= , and dnssec-anchors= options. Experimental support for Postquantum Preshared Keys (PPK) through the ppk=yes|no|insist option. Support for Signature Authentication (RFC 7427) for RSA-SHA. The new logip= option with the default value yes can be used to disable logging of incoming IP addresses. This is useful for large-scale service providers concerned for privacy. Unbound DNS server ipsecmod module support for Opportunistic IPsec using IPSECKEY records in DNS. Support for the Differentiated Services Code Point (DSCP) architecture through the decap-dscp=yes option. DSCP was formerly known as Terms Of Service (TOS). Support for disabling Path MTU Discovery (PMTUD) through the nopmtudisc=yes option. Support for the IDr (Identification - Responder) payload for improved multi-domain deployments. Resending IKE packets on extremely busy servers that return the EAGAIN error message. Various improvements to the updown scripts for customizations. Updated preferences of crypto algorithms as per RFC 8221 and RFC 8247. Added the %none and /dev/null values to the leftupdown= option for disabling the updown script. Improved support for rekeying using the CREATE_CHILD_SA exchange. IKEv1 XAUTH thread race conditions resolved. Significant performance increase due to optimized pthread locking. See the ipsec.conf man page for more information. (BZ# 1457904 ) libreswan now supports IKEv2 MOBIKE This update introduces support for the IKEv2 Mobility and Multihoming (MOBIKE) protocol (RFC 4555) using the XFRM_MIGRATE mechanism through the mobike=yes|no option. MOBIKE enables seamless switching of networks, for example, Wi-Fi, LTE, and so on, without disturbing the IPsec tunnel. (BZ# 1471763 ) scap-workbench rebased to version 1.1.6 The scap-workbench packages have been upgraded to version 1.1.6, which provides a number of bug fixes and enhancements over the version. Notable changes are: Added support for generating Bash and Ansible remediation roles from profiles and for scanning results. The generated remediations can be saved to a file for later use. Added support for opening tailoring files directly from the command line. Fixed a short integer overflow when using SSH port numbers higher than 32,768. (BZ# 1479036 ) OpenSCAP is now able to generate results for DISA STIG Viewer The OpenSCAP suite is now able to generate results in the format compatible with the DISA STIG Viewer tool. This enables the user to scan a local system for Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) compliance and open results in DISA STIG Viewer . (BZ# 1505517 ) selinux-policy no longer contains permissive domains As a security hardening measure, the SELinux policy now does not set the following domains to permissive mode by default: blkmapd_t hsqldb_t ipmievd_t sanlk_resetd_t systemd_hwdb_t targetd_t The default mode for these domains is now set to enforcing. (BZ# 1494172 ) audit rebased to version 2.8.1 The audit packages have been upgraded to upstream version 2.8.1, which provides a number of bug fixes and enhancements over the version. Notable changes are: Added support for ambient capability fields. The Audit daemon now works also on IPv6. Added the default port to the auditd.conf file. Fixed the auvirt tool to report Access Vector Cache (AVC) messages. (BZ# 1476406 ) OpenSC now supports the SCE7.0 144KDI CAC Alt. 
tokens This update adds support for the SCE7.0 144KDI Common Access Card (CAC) Alternate tokens. These new cards were not compliant with the U.S. Department of Defense (DoD) Implementation Guide for CAC PIV End-Point specification, and the OpenSC driver has been updated to reflect the updated specification. (BZ# 1473418 )
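As a usage sketch for the NBDE feature described at the beginning of this chapter, the following commands bind a LUKS-encrypted removable device to a Tang server and later unlock it. The device path and Tang server URL are hypothetical placeholders.

# clevis luks bind -d /dev/sdb1 tang '{"url": "http://tang.example.com"}'
# clevis luks unlock -d /dev/sdb1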
[ "allow source_domain target_type:process2 { nnp_transition nosuid_transition };", "allow init_t fprintd_t:process2 { nnp_transition nosuid_transition };" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_security
Chapter 9. Installing a cluster on AWS into a government region
Chapter 9. Installing a cluster on AWS into a government region In OpenShift Container Platform version 4.15, you can install a cluster on Amazon Web Services (AWS) into a government region. To configure the region, modify parameters in the install-config.yaml file before you install the cluster. 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 9.2. AWS government regions OpenShift Container Platform supports deploying a cluster to an AWS GovCloud (US) region. The following AWS GovCloud partitions are supported: us-gov-east-1 us-gov-west-1 9.3. Installation requirements Before you can install the cluster, you must: Provide an existing private AWS VPC and subnets to host the cluster. Public zones are not supported in Route 53 in AWS GovCloud. As a result, clusters must be private when you deploy to an AWS government region. Manually create the installation configuration file ( install-config.yaml ). 9.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. Note Public zones are not supported in Route 53 in an AWS GovCloud Region. Therefore, clusters must be private if they are deployed to an AWS GovCloud Region. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 9.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. 
The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 9.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 9.5. About using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 9.5.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. 
See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. 
Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 9.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 9.5.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 9.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. 
Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 9.5.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 9.6. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.8. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 9.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. 
Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 9.10. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 9.10.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled.
When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 9.10.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 9.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 9.10.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 9.2. Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 9.10.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{"auths": ...}' 23 1 12 14 23 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 
18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 9.10.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 9.10.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. 
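For example, you can confirm that the security groups exist and are attached to the intended VPC before you edit the configuration. The following AWS CLI check is a sketch only; the group IDs are placeholders for your own values, and it assumes that the AWS CLI is configured for the account and region that host the VPC:
aws ec2 describe-security-groups \
  --group-ids sg-1 sg-2 sg-3 sg-4 \
  --query 'SecurityGroups[].[GroupId,VpcId]' \
  --output table
Each returned VpcId must match the VPC that you are deploying the cluster to.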
Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses.
9.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 9.12.
Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Incorporating the Cloud Credential Operator utility manifests . 9.12.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... 
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 9.12.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 9.12.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 9.3. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 9.4. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
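For example, the two variables might resolve to pull specs similar to the following. These values are illustrative only; the exact pull specs depend on the release that you are installing, so do not copy them verbatim:
RELEASE_IMAGE=quay.io/openshift-release-dev/ocp-release:4.15.0-x86_64
CCO_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<digest>
You can print the resolved values with echo "$RELEASE_IMAGE" and echo "$CCO_IMAGE" to confirm that both commands succeeded before you continue.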
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 9.12.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 9.12.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. 
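For example, with a hypothetical working directory of /home/user/aws-install that contains your install-config.yaml file, the extraction command might look like the following sketch; adjust the paths to match your environment:
oc adm release extract \
  --from="$RELEASE_IMAGE" \
  --credentials-requests \
  --included \
  --install-config=/home/user/aws-install/install-config.yaml \
  --to=/home/user/aws-install/credrequests
The credrequests directory is created if it does not exist, and you pass it to the ccoctl command in the next step as the --credentials-requests-dir value.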
Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 9.12.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. 
Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 9.12.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 9.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. 
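Before you deploy, you can optionally confirm that the manifests and the private key that you copied in the previous section are in place. The following listing is illustrative only; the exact file names depend on your cluster configuration and release:
ls <installation_directory>/manifests
ls <installation_directory>/tls
The manifests directory should contain the cluster-authentication-02-config.yaml file and the component credentials secrets, and the tls directory should contain the bound-service-account-signing-key.key file.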
Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
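For example, if the installation files are stored in a hypothetical /home/user/aws-install directory, the command becomes:
export KUBECONFIG=/home/user/aws-install/auth/kubeconfig
The variable applies only to the current shell session; add it to your shell profile, or pass the --kubeconfig option to individual oc commands, if you need the setting to persist.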
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin
9.15. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
9.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service.
9.17. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: 
cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", 
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]