title | content | commands | url
---|---|---|---|
Appendix C. Revision History | Appendix C. Revision History 0.3-4 Fri June 28, Lenka Spackova ( [email protected] ) Updated a link to the Converting from a Linux distribution to RHEL using the Convert2RHEL utility guide (Overview). 0.3-3 Fri Apr 28 2023, Lucie Varakova ( [email protected] ) Added a known issue JIRA:RHELPLAN-155168 (Authentication and Interoperability). 0.3-2 Wed Oct 19 2022, Lenka Spackova ( [email protected] ) Added information on how to configure unbound to run inside chroot , BZ#2121623 (Networking). 0.3-1 Wed Sep 21 2022, Lenka Spackova ( [email protected] ) Added two new enhancements, BZ#1967950 and BZ#1993822 (Security). 0.3-0 Fri Apr 22 2022, Lenka Spackova ( [email protected] ) Added two deprecated packages to Deprecated Functionality . 0.2-9 Thu Feb 17 2022, Lenka Spackova ( [email protected] ) Added two notes related to supportability to Deprecated Functionality . 0.2-8 Tue Feb 08 2022, Lenka Spackova ( [email protected] ) Added information about the hidepid=n mount option not being recommended in RHEL 7 to Deprecated Functionality . 0.2-7 Wed Jan 26 2022, Lenka Spackova ( [email protected] ) Added a known issue BZ#2042313 (System and Subscription Management). 0.2-6 Tue Dec 07 2021, Lenka Spackova ( [email protected] ) Added a bug fix BZ#1942281 (Security). Changed a known issue to a bug fix BZ#1976123 (Security). 0.2-5 Tue Aug 17 2021, Lenka Spackova ( [email protected] ) Updated the Red Hat Software Collections section. 0.2-4 Wed Jul 21 2021, Lenka Spackova ( [email protected] ) Added enhancements BZ#1958789 and BZ#1955180 (Security). 0.2-3 Mon Jul 12 2021, Lenka Spackova ( [email protected] ) Added a known issue BZ#1976123 (Security). 0.2-2 Thu Jun 03 2021, Lenka Spackova ( [email protected] ) Added a known issue BZ#1933998 (Kernel). Added a bug fix BZ#1890111 (Security). 0.2-1 Fri May 21 2021, Lenka Spackova ( [email protected] ) Updated information about OS conversion in Overview . 0.2-0 Wed Apr 28 2021, Lenka Spackova ( [email protected] ) Added a bug fix BZ#1891435 (Security). 0.1-9 Mon Apr 26 2021, Lenka Spackova ( [email protected] ) Added a known issue BZ#1942865 (Storage). 0.1-8 Tue Apr 06 2021, Lenka Spackova ( [email protected] ) Improved the list of supported architectures. 0.1-7 Wed Mar 31 2021, Lenka Spackova ( [email protected] ) Updated information about OS conversions with the availability of the supported Convert2RHEL utility. 0.1-6 Tue Mar 30 2021, Lenka Spackova ( [email protected] ) Added a known issue (Kernel). 0.1-5 Tue Mar 02 2021, Lenka Spackova ( [email protected] ) Updated a link to Upgrading from RHEL 6 to RHEL 7 . Fixed CentOS Linux name. 0.1-4 Wed Feb 03 2021, Lenka Spackova ( [email protected] ) Added a note about deprecated parameters for the network configuration in the kernel command line. 0.1-3 Tue Feb 02 2021, Lenka Spackova ( [email protected] ) Added a retirement notice for Red Hat Enterprise Linux Atomic Host . 0.1-2 Thu Jan 28 2021, Lenka Spackova ( [email protected] ) Added a note related to the new page_owner kernel parameter. 0.1-1 Tue Jan 19 2021, Lenka Spackova ( [email protected] ) Updated deprecated packages. 0.1-0 Wed Dec 16 2020, Lenka Spackova ( [email protected] ) Added mthca to deprecated drivers. 0.0-9 Tue Dec 15 2020, Lenka Spackova ( [email protected] ) Added information about the STIG security profile update (Security). 0.0-8 Wed Nov 25 2020, Lenka Spackova ( [email protected] ) Added a known issue (Security). 
0.0-7 Wed Nov 11 2020, Lenka Spackova ( [email protected] ) Added a known issue (RHEL in cloud environments). 0.0-6 Tue Oct 13 2020, Lenka Spackova ( [email protected] ) Updated deprecated adapters. Fixed a driver name in a Technology Preview note ( iavf ). 0.0-5 Tue Sep 29 2020, Lenka Spackova ( [email protected] ) Release of the Red Hat Enterprise Linux 7.9 Release Notes. 0.0-4 Mon Sep 7 2020, Jaroslav Klech ( [email protected] ) Provided the correct expansion of BERT in the kernel parameters section. 0.0-3 Thu Jun 25 2020, Lenka Spackova ( [email protected] ) Added a known issue related to OpenLDAP libraries (Servers and Services). 0.0-2 Tue Jun 23 2020, Jaroslav Klech ( [email protected] ) Added and granulated the kernel parameters chapter. Added the device drivers chapter. 0.0-1 Thu Jun 18 2020, Lenka Spackova ( [email protected] ) Various additions. 0.0-0 Wed May 20 2020, Lenka Spackova ( [email protected] ) Release of the Red Hat Enterprise Linux 7.9 Beta Release Notes. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.9_release_notes/revision_history |
CLI tools | CLI tools OpenShift Container Platform 4.17 Learning how to use the command-line tools for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/index |
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/pr01 |
Chapter 1. Backup and restore | Chapter 1. Backup and restore 1.1. Control plane backup and restore operations As a cluster administrator, you might need to stop an OpenShift Container Platform cluster for a period and restart it later. Some reasons for restarting a cluster are that you need to perform maintenance on a cluster or want to reduce resource costs. In OpenShift Container Platform, you can perform a graceful shutdown of a cluster so that you can easily restart the cluster later. You must back up etcd data before shutting down a cluster; etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects. An etcd backup plays a crucial role in disaster recovery. In OpenShift Container Platform, you can also replace an unhealthy etcd member . When you want to get your cluster running again, restart the cluster gracefully . Note A cluster's certificates expire one year after the installation date. You can shut down a cluster and expect it to restart gracefully while the certificates are still valid. Although the cluster automatically retrieves the expired control plane certificates, you must still approve the certificate signing requests (CSRs) . You might run into several situations where OpenShift Container Platform does not work as expected, such as: You have a cluster that is not functional after the restart because of unexpected conditions, such as node failure or network connectivity issues. You have deleted something critical in the cluster by mistake. You have lost the majority of your control plane hosts, leading to etcd quorum loss. You can always recover from a disaster situation by restoring your cluster to its state using the saved etcd snapshots. Additional resources Quorum protection with machine lifecycle hooks 1.2. Application backup and restore operations As a cluster administrator, you can back up and restore applications running on OpenShift Container Platform by using the OpenShift API for Data Protection (OADP). OADP backs up and restores Kubernetes resources and internal images, at the granularity of a namespace, by using the version of Velero that is appropriate for the version of OADP you install, according to the table in Downloading the Velero CLI tool . OADP backs up and restores persistent volumes (PVs) by using snapshots or Restic. For details, see OADP features . 1.2.1. OADP requirements OADP has the following requirements: You must be logged in as a user with a cluster-admin role. You must have object storage for storing backups, such as one of the following storage types: OpenShift Data Foundation Amazon Web Services Microsoft Azure Google Cloud Platform S3-compatible object storage IBM Cloud(R) Object Storage S3 Note If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1. x . OADP 1.0. x does not support CSI backup on OCP 4.11 and later. OADP 1.0. x includes Velero 1.7. x and expects the API group snapshot.storage.k8s.io/v1beta1 , which is not present on OCP 4.11 and later. Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To back up PVs with snapshots, you must have cloud storage that has a native snapshot API or supports Container Storage Interface (CSI) snapshots, such as the following providers: Amazon Web Services Microsoft Azure Google Cloud Platform CSI snapshot-enabled cloud storage, such as Ceph RBD or Ceph FS Note If you do not want to back up PVs by using snapshots, you can use Restic , which is installed by the OADP Operator by default. 1.2.2. Backing up and restoring applications You back up applications by creating a Backup custom resource (CR). See Creating a Backup CR . You can configure the following backup options: Creating backup hooks to run commands before or after the backup operation Scheduling backups Backing up applications with File System Backup: Kopia or Restic You restore application backups by creating a Restore (CR). See Creating a Restore CR . You can configure restore hooks to run commands in init containers or in the application container during the restore operation. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/backup_and_restore/backup-restore-overview |
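To make the Backup and Restore CR workflow described above concrete, the following is a minimal, hedged sketch of the two custom resources. It assumes the OADP Operator is installed in the openshift-adp namespace and that a DataProtectionApplication with a backup storage location already exists; the names example-backup, example-restore, and example-app are placeholders, not values taken from this guide.

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup          # placeholder name
  namespace: openshift-adp      # assumes the default OADP install namespace
spec:
  includedNamespaces:
    - example-app               # placeholder application namespace to back up
  ttl: 720h0m0s                 # how long to retain the backup
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: example-restore
  namespace: openshift-adp
spec:
  backupName: example-backup    # restore from the backup created above
  restorePVs: true              # also restore persistent volumes
```

Apply each manifest with oc apply -f <file>.yaml and watch progress with oc get backup,restore -n openshift-adp; the exact fields depend on your OADP version, so confirm them against the Creating a Backup CR and Creating a Restore CR references linked above.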
Chapter 43. Managing hosts using Ansible playbooks | Chapter 43. Managing hosts using Ansible playbooks Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for Identity Management (IdM), and you can use Ansible modules to automate host management. The following concepts and operations are performed when managing hosts and host entries using Ansible playbooks: Ensuring the presence of IdM host entries that are only defined by their FQDNs Ensuring the presence of IdM host entries with IP addresses Ensuring the presence of multiple IdM host entries with random passwords Ensuring the presence of an IdM host entry with multiple IP addresses Ensuring the absence of IdM host entries 43.1. Ensuring the presence of an IdM host entry with FQDN using Ansible playbooks Follow this procedure to ensure the presence of host entries in Identity Management (IdM) using Ansible playbooks. The host entries are only defined by their fully-qualified domain names (FQDNs). Specifying the FQDN name of the host is enough if at least one of the following conditions applies: The IdM server is not configured to manage DNS. The host does not have a static IP address or the IP address is not known at the time the host is configured. Adding a host defined only by an FQDN essentially creates a placeholder entry in the IdM DNS service. For example, laptops may be preconfigured as IdM clients, but they do not have IP addresses at the time they are configured. When the DNS service dynamically updates its records, the host's current IP address is detected and its DNS record is updated. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the FQDN of the host whose presence in IdM you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/add-host.yml file: Run the playbook: Note The procedure results in a host entry in the IdM LDAP server being created but not in enrolling the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms that host01.idm.example.com exists in IdM. 43.2. 
Ensuring the presence of an IdM host entry with DNS information using Ansible playbooks Follow this procedure to ensure the presence of host entries in Identity Management (IdM) using Ansible playbooks. The host entries are defined by their fully-qualified domain names (FQDNs) and their IP addresses. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the host whose presence in IdM you want to ensure. In addition, if the IdM server is configured to manage DNS and you know the IP address of the host, specify a value for the ip_address parameter. The IP address is necessary for the host to exist in the DNS resource records. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/host-present.yml file. You can also include other, additional information: Run the playbook: Note The procedure results in a host entry in the IdM LDAP server being created but not in enrolling the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms host01.idm.example.com exists in IdM. 43.3. Ensuring the presence of multiple IdM host entries with random passwords using Ansible playbooks The ipahost module allows the system administrator to ensure the presence or absence of multiple host entries in IdM using just one Ansible task. Follow this procedure to ensure the presence of multiple host entries that are only defined by their fully-qualified domain names (FQDNs). Running the Ansible playbook generates random passwords for the hosts. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the hosts whose presence in IdM you want to ensure. To make the Ansible playbook generate a random password for each host even when the host already exists in IdM and update_password is limited to on_create , add the random: true and force: true options. To simplify this step, you can copy and modify the example from the /usr/share/doc/ansible-freeipa/README-host.md Markdown file: Run the playbook: Note To deploy the hosts as IdM clients using random, one-time passwords (OTPs), see Authorization options for IdM client enrollment using an Ansible playbook or Installing a client by using a one-time password: Interactive installation . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of one of the hosts: The output confirms host01.idm.example.com exists in IdM with a random password. 43.4. Ensuring the presence of an IdM host entry with multiple IP addresses using Ansible playbooks Follow this procedure to ensure the presence of a host entry in Identity Management (IdM) using Ansible playbooks. The host entry is defined by its fully-qualified domain name (FQDN) and its multiple IP addresses. Note In contrast to the ipa host utility, the Ansible ipahost module can ensure the presence or absence of several IPv4 and IPv6 addresses for a host. The ipa host-mod command cannot handle IP addresses. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file. Specify, as the name of the ipahost variable, the fully-qualified domain name (FQDN) of the host whose presence in IdM you want to ensure. Specify each of the multiple IPv4 and IPv6 ip_address values on a separate line by using the ip_address syntax. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/host-member-ipaddresses-present.yml file. You can also include additional information: Run the playbook: Note The procedure creates a host entry in the IdM LDAP server but does not enroll the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . 
Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms that host01.idm.example.com exists in IdM. To verify that the multiple IP addresses of the host exist in the IdM DNS records, enter the ipa dnsrecord-show command and specify the following information: The name of the IdM domain The name of the host The output confirms that all the IPv4 and IPv6 addresses specified in the playbook are correctly associated with the host01.idm.example.com host entry. 43.5. Ensuring the absence of an IdM host entry using Ansible playbooks Follow this procedure to ensure the absence of host entries in Identity Management (IdM) using Ansible playbooks. Prerequisites IdM administrator credentials Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the host whose absence from IdM you want to ensure. If your IdM domain has integrated DNS, use the updatedns: true option to remove the associated records of any kind for the host from the DNS. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/delete-host.yml file: Run the playbook: Note The procedure results in: The host not being present in the IdM Kerberos realm. The host entry not being present in the IdM LDAP server. To remove the specific IdM configuration of system services, such as System Security Services Daemon (SSSD), from the client host itself, you must run the ipa-client-install --uninstall command on the client. For details, see Uninstalling an IdM client . Verification Log into ipaserver as admin: Display information about host01.idm.example.com : The output confirms that the host does not exist in IdM. 43.6. Additional resources See the /usr/share/doc/ansible-freeipa/README-host.md Markdown file. See the additional playbooks in the /usr/share/doc/ansible-freeipa/playbooks/host directory. | [
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com state: present force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host01.idm.example.com is present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com description: Example host ip_address: 192.168.0.123 locality: Lab ns_host_location: Lab ns_os_version: CentOS 7 ns_hardware_platform: Lenovo T61 mac_address: - \"08:00:27:E3:B1:2D\" - \"52:54:00:BD:97:1E\" state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Description: Example host Locality: Lab Location: Lab Platform: Lenovo T61 Operating system: CentOS 7 Principal name: host/[email protected] Principal alias: host/[email protected] MAC address: 08:00:27:E3:B1:2D, 52:54:00:BD:97:1E Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Ensure hosts with random password hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Hosts host01.idm.example.com and host02.idm.example.com present with random passwords ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" hosts: - name: host01.idm.example.com random: true force: true - name: host02.idm.example.com random: true force: true register: ipahost",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-are-present.yml [...] TASK [Hosts host01.idm.example.com and host02.idm.example.com present with random passwords] changed: [r8server.idm.example.com] => {\"changed\": true, \"host\": {\"host01.idm.example.com\": {\"randompassword\": \"0HoIRvjUdH0Ycbf6uYdWTxH\"}, \"host02.idm.example.com\": {\"randompassword\": \"5VdLgrf3wvojmACdHC3uA3s\"}}}",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Password: True Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host member IP addresses present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host101.example.com IP addresses present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com ip_address: - 192.168.0.123 - fe80::20c:29ff:fe02:a1b3 - 192.168.0.124 - fe80::20c:29ff:fe02:a1b4 force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-with-multiple-IP-addreses-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"ipa dnsrecord-show idm.example.com host01 [...] Record name: host01 A record: 192.168.0.123, 192.168.0.124 AAAA record: fe80::20c:29ff:fe02:a1b3, fe80::20c:29ff:fe02:a1b4",
"[ipaserver] server.idm.example.com",
"--- - name: Host absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com absent ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com updatedns: true state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa host-show host01.idm.example.com ipa: ERROR: host01.idm.example.com: host not found"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-hosts-using-Ansible-playbooks_managing-users-groups-hosts |
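The prerequisites in the chapter above repeatedly reference a secret.yml Ansible vault that stores ipaadmin_password, but do not show how that vault is created. The following is a hedged sketch of that step using standard ansible-vault commands; the file locations and the password_file name mirror the examples in this chapter and are otherwise placeholders.

```
# Create the encrypted vault referenced by the playbooks (opens an editor, prompts for a vault password)
ansible-vault create ~/MyPlaybooks/secret.yml

# Inside the editor, define the variable the ipahost tasks expect:
#   ipaadmin_password: <IdM_admin_password>

# Optionally keep the vault password in a file readable only by you, for use with --vault-password-file
echo '<vault_password>' > password_file
chmod 600 password_file
```

The playbooks in this chapter then decrypt the vault at run time through the --vault-password-file option shown in the ansible-playbook commands.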
Appendix B. Maven Configuration Information | Appendix B. Maven Configuration Information B.1. Install the JBoss Enterprise Application Platform Repository Using Nexus This example outlines the steps to install the JBoss Enterprise Application Platform 6 Maven Repository using Sonatype Nexus Maven Repository Manager. For further instructions, see http://www.sonatype.org/nexus/ . Procedure B.1. Download the JBoss Enterprise Application Platform 6 Maven Repository ZIP archive Open a web browser and access the following URL: https://access.redhat.com/jbossnetwork/restricted/listSoftware.html?product=appplatform . Find Application Platform 6 Maven Repository in the list. Click Download to download a ZIP file that contains the repository. Unzip the files into the desired target directory. Procedure B.2. Add the JBoss Enterprise Application Platform 6 Maven Repository using Nexus Maven Repository Manager Log into Nexus as an Administrator. Select the Repositories section from the Views Repositories menu to the left of your repository manager. Click the Add... drop-down menu, then select Hosted Repository . Provide a name and ID for the new repository. Enter the unzipped repository path in the Override Local Storage Location field. Continue if the artifact must be available in a repository group. If not, do not continue with this procedure. Select the repository group. Click on the Configure tab. Drag the new JBoss Maven repository from the Available Repositories list to the Ordered Group Repositories list on the left. Note The order of this list determines the priority for searching Maven artifacts. Result The repository is configured using Nexus Maven Repository Manager. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/appe-Maven_Configuration_Information |
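Once the hosted repository is added to a Nexus group as described above, client machines still need to resolve artifacts from that group. The settings.xml profile below is a hedged sketch of that step; the group URL http://nexus.example.com:8081/nexus/content/groups/public/ is a placeholder for your own Nexus instance and is not taken from this guide.

```xml
<!-- ~/.m2/settings.xml -->
<settings>
  <profiles>
    <profile>
      <id>nexus-eap-repository</id>
      <repositories>
        <repository>
          <id>nexus-group</id>
          <!-- Placeholder URL: replace with the public group URL of your Nexus instance -->
          <url>http://nexus.example.com:8081/nexus/content/groups/public/</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>false</enabled></snapshots>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>nexus-eap-repository</activeProfile>
  </activeProfiles>
</settings>
```

A matching <pluginRepositories> entry is usually added as well if Maven plugins are to be resolved from the same group.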
Chapter 3. Using Red Hat Single Sign-On with Spring Boot | Chapter 3. Using Red Hat Single Sign-On with Spring Boot Red Hat Single Sign-On client adapters are libraries that make it very easy to secure applications and services with Red Hat Single Sign-On. You can use the Keycloak Spring Boot adapter to secure your Spring Boot project. 3.1. Using Red Hat Single Sign-On with Spring Boot Container To secure a Spring Boot application, add the Keycloak Spring Boot adapter JAR to your project. The Keycloak Spring Boot adapter takes advantage of Spring Boot's autoconfiguration feature so all you need to do is add the Keycloak Spring Boot starter to your project. Procedure To manually add the Keycloak Spring Boot starter, add the following to your project's pom.xml . Add the Adapter BOM dependency. Configure your Spring Boot project to use Keycloak. Instead of a keycloak.json file, you can configure the realm for the Spring Boot Keycloak adapter using the normal Spring Boot configuration. For example, add following configuration to src/main/resources/application.properties file. You can disable the Keycloak Spring Boot Adapter (for example in tests) by setting keycloak.enabled = false . To configure a Policy Enforcer, unlike keycloak.json , policy-enforcer-config must be used instead of just policy-enforcer . Specify the Java EE security configuration in the web.xml . The Spring Boot Adapter will set the login-method to KEYCLOAK and configure the security-constraints at the time of startup. An example configuration is given below. Note: If you plan to deploy your Spring Application as a WAR then do not use the Spring Boot Adapter. Use the dedicated adapter for the application server or servlet container you are using. Your Spring Boot should also contain a web.xml file. | [
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-spring-boot-starter</artifactId> </dependency>",
"<dependencyManagement> <dependencies> <dependency> <groupId>org.keycloak.bom</groupId> <artifactId>keycloak-adapter-bom</artifactId> <version>3.4.17.Final-redhat-00001</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"keycloak.realm = demorealm keycloak.auth-server-url = http://127.0.0.1:8080/auth keycloak.ssl-required = external keycloak.resource = demoapp keycloak.credentials.secret = 11111111-1111-1111-1111-111111111111 keycloak.use-resource-role-mappings = true",
"keycloak.securityConstraints[0].authRoles[0] = admin keycloak.securityConstraints[0].authRoles[1] = user keycloak.securityConstraints[0].securityCollections[0].name = insecure stuff keycloak.securityConstraints[0].securityCollections[0].patterns[0] = /insecure keycloak.securityConstraints[1].authRoles[0] = admin keycloak.securityConstraints[1].securityCollections[0].name = admin stuff keycloak.securityConstraints[1].securityCollections[0].patterns[0] = /admin"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_spring_boot/using-rh-sso-with-spring-boot |
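The chapter above notes that a Spring Boot application secured with the Keycloak adapter should still contain a web.xml file, and that the adapter sets the login-method and security-constraints itself at startup. The following is a hedged sketch of such a minimal descriptor, placed at src/main/webapp/WEB-INF/web.xml; the display-name value simply reuses the demoapp resource name from the earlier properties example and is otherwise arbitrary.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">
    <!-- Kept nearly empty: the Keycloak Spring Boot adapter registers the
         security constraints defined in application.properties at startup. -->
    <display-name>demoapp</display-name>
</web-app>
```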
Chapter 3. API configuration examples | Chapter 3. API configuration examples 3.1. external_registry_config object reference { "is_enabled": True, "external_reference": "quay.io/redhat/quay", "sync_interval": 5000, "sync_start_date": datetime(2020, 0o1, 0o2, 6, 30, 0), "external_registry_username": "fakeUsername", "external_registry_password": "fakePassword", "external_registry_config": { "verify_tls": True, "unsigned_images": False, "proxy": { "http_proxy": "http://insecure.proxy.corp", "https_proxy": "https://secure.proxy.corp", "no_proxy": "mylocalhost", }, }, } 3.2. rule_rule object reference { "root_rule": {"rule_kind": "tag_glob_csv", "rule_value": ["latest", "foo", "bar"]}, } | [
"{ \"is_enabled\": True, \"external_reference\": \"quay.io/redhat/quay\", \"sync_interval\": 5000, \"sync_start_date\": datetime(2020, 0o1, 0o2, 6, 30, 0), \"external_registry_username\": \"fakeUsername\", \"external_registry_password\": \"fakePassword\", \"external_registry_config\": { \"verify_tls\": True, \"unsigned_images\": False, \"proxy\": { \"http_proxy\": \"http://insecure.proxy.corp\", \"https_proxy\": \"https://secure.proxy.corp\", \"no_proxy\": \"mylocalhost\", }, }, }",
"{ \"root_rule\": {\"rule_kind\": \"tag_glob_csv\", \"rule_value\": [\"latest\", \"foo\", \"bar\"]}, }"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_api_guide/api-config-examples |
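The external_registry_config and root_rule objects above are the payload fragments used when configuring repository mirroring through the Red Hat Quay API. As a hedged illustration only, the request below assumes the POST /api/v1/repository/{repository}/mirror endpoint and an OAuth access token with the appropriate scope; the host name, token, robot account, and field values are placeholders, and the exact required fields should be confirmed against the API reference for your Quay version.

```
curl -X POST "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror" \
  -H "Authorization: Bearer <oauth_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
        "is_enabled": true,
        "external_reference": "quay.io/redhat/quay",
        "sync_interval": 86400,
        "sync_start_date": "2024-01-01T00:00:00Z",
        "robot_username": "<namespace>+<robot_name>",
        "root_rule": {"rule_kind": "tag_glob_csv", "rule_value": ["latest", "foo", "bar"]},
        "external_registry_config": {"verify_tls": true, "unsigned_images": false}
      }'
```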
25.3. Setting up the Challenge-Handshake Authentication Protocol | 25.3. Setting up the Challenge-Handshake Authentication Protocol After configuring an ACL and creating an iSCSI initiator, set up the Challenge-Handshake Authentication Protocol (CHAP). For more information on configuring an ACL and creating an iSCSI initiator, see Section 25.1.6, "Configuring ACLs" and Section 25.2, "Creating an iSCSI Initiator" . The CHAP allows the user to protect the target with a password. The initiator must be aware of this password to be able to connect to the target. Procedure 25.8. Setting up the CHAP for target Set attribute authentication: Set userid and password: Procedure 25.9. Setting up the CHAP for initiator Edit the iscsid.conf file: Enable the CHAP authentication in the iscsid.conf file: By default, the node.session.auth.authmethod option is set to None . Add target user name and password in the iscsid.conf file: Restart the iscsid service: For more information, see the targetcli and iscsiadm man pages. | [
"/iscsi/iqn.20...mple:444/tpg1> set attribute authentication=1 Parameter authentication is now '1'.",
"/iscsi/iqn.20...mple:444/tpg1> set auth userid= redhat Parameter userid is now 'redhat'. /iscsi/iqn.20...mple:444/tpg1> set auth password= redhat_passwd Parameter password is now 'redhat_passwd'.",
"vi /etc/iscsi/iscsid.conf node.session.auth.authmethod = CHAP",
"node.session.auth.username = redhat node.session.auth.password = redhat_passwd",
"systemctl restart iscsid.service"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/osm-setting-up-the-challenge-handshake-authentication-protocol |
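After the target-side and initiator-side CHAP settings above are in place, the initiator still has to discover and log in to the target. The following is a hedged sketch of that step with iscsiadm; the portal IP 192.168.1.100 and the full IQN are placeholders, since the guide truncates the target IQN (iqn.20...mple:444).

```
# Discover targets exposed by the portal (replace the IP with your target portal)
iscsiadm -m discovery -t sendtargets -p 192.168.1.100

# Log in to the discovered target; the CHAP credentials are read from /etc/iscsi/iscsid.conf
iscsiadm -m node -T iqn.2006-04.com.example:444 -p 192.168.1.100 --login
```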
Chapter 7. Red Hat Quay sizing and subscriptions | Chapter 7. Red Hat Quay sizing and subscriptions Scalability of Red Hat Quay is one of its key strengths, with a single code base supporting a broad spectrum of deployment sizes, including the following: Proof of Concept deployment on a single development machine Mid-size deployment of approximately 2,000 users that can serve content to dozens of Kubernetes clusters High-end deployment such as Quay.io that can serve thousands of Kubernetes clusters world-wide Since sizing heavily depends on a multitude of factors, such as the number of users, images, concurrent pulls and pushes, there are no standard sizing recommendations. The following are the minimum requirements for systems running Red Hat Quay (per container/pod instance): Quay: minimum 6 GB; recommended 8 GB, 2 or more vCPUs Clair: recommended 2 GB RAM and 2 or more vCPUs Storage: recommended 30 GB NooBaa: minimum 2 GB, 1 vCPU (when objectstorage component is selected by the Operator) Clair database: minimum 5 GB required for security metadata Stateless components of Red Hat Quay can be scaled out, but this will cause a heavier load on stateful backend services. 7.1. Red Hat Quay sample sizings The following table shows approximate sizing for Proof of Concept, mid-size, and high-end deployments. Whether a deployment runs appropriately with the same metrics depends on many factors not shown below. Metric Proof of concept Mid-size High End (Quay.io) No. of Quay containers by default 1 4 15 No. of Quay containers max at scale-out N/A 8 30 No. of Clair containers by default 1 3 10 No. of Clair containers max at scale-out N/A 6 15 No. of mirroring pods (to mirror 100 repositories) 1 5-10 N/A Database sizing 2-4 Cores 6-8 GB RAM 10-20 GB disk 4-8 Cores 6-32 GB RAM 100 GB - 1 TB disk 32 cores 244 GB 1+ TB disk Object storage backend sizing 10-100 GB 1 - 20 TB 50+ TB up to PB Redis cache sizing 2 Cores 2-4 GB RAM 4 cores 28 GB RAM Underlying node sizing (physical or virtual) 4 Cores 8 GB RAM 4-6 Cores 12-16 GB RAM Quay: 13 cores 56 GB RAM Clair: 2 cores 4 GB RAM For further details on sizing & related recommendations for mirroring, see the section on repository mirroring . The sizing for the Redis cache is only relevant if you use Quay builders, otherwise it is not significant. 7.2. Red Hat Quay subscription information Red Hat Quay is available with Standard or Premium support, and subscriptions are based on deployments. Note Deployment means an installation of a single Red Hat Quay registry using a shared data backend. With a Red Hat Quay subscription, the following options are available: There is no limit on the number of pods, such as Quay, Clair, Builder, and so on, that you can deploy. Red Hat Quay pods can run in multiple data centers or availability zones. Storage and database backends can be deployed across multiple data centers or availability zones, but only as a single, shared storage backend and single, shared database backend. Red Hat Quay can manage content for an unlimited number of clusters or standalone servers. Clients can access the Red Hat Quay deployment regardless of their physical location. You can deploy Red Hat Quay on OpenShift Container Platform infrastructure nodes to minimize subscription requirements. You can run the Container Security Operator (CSO) and the Quay Bridge Operator (QBO) on your OpenShift Container Platform clusters at no additional cost. Note Red Hat Quay geo-replication requires a subscription for each storage replication. 
The database, however, is shared. For more information about purchasing a Red Hat Quay subscription, see Red Hat Quay . 7.3. Using Red Hat Quay with or without internal registry Red Hat Quay can be used as an external registry in front of multiple OpenShift Container Platform clusters with their internal registries. Red Hat Quay can also be used in place of the internal registry when it comes to automating builds and deployment rollouts. The required coordination of Secrets and ImageStreams is automated by the Quay Bridge Operator, which can be launched from the OperatorHub for OpenShift Container Platform. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_architecture/sizing-intro |
10.3. Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later) | 10.3. Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later) As of Red Hat Enterprise Linux 7.3, you can modify general quorum options for your cluster with the pcs quorum update command. Executing this command requires that the cluster be stopped. For information on the quorum options, see the votequorum (5) man page. The format of the pcs quorum update command is as follows. The following series of commands modifies the wait_for_all quorum option and displays the updated status of the option. Note that the system does not allow you to execute this command while the cluster is running. | [
"pcs quorum update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]] [last_man_standing_window=[ time-in-ms ] [wait_for_all=[0|1]]",
"pcs quorum update wait_for_all=1 Checking corosync is not running on nodes Error: node1: corosync is running Error: node2: corosync is running pcs cluster stop --all node2: Stopping Cluster (pacemaker) node1: Stopping Cluster (pacemaker) node1: Stopping Cluster (corosync) node2: Stopping Cluster (corosync) pcs quorum update wait_for_all=1 Checking corosync is not running on nodes node2: corosync is not running node1: corosync is not running Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded pcs quorum config Options: wait_for_all: 1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-quorumoptmodify-HAAR |
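Because pcs quorum update requires the cluster to be stopped, a natural follow-up, shown as a hedged sketch below, is to start the cluster again and confirm the new quorum behavior; pcs quorum status prints the votequorum membership and flag information once the nodes rejoin.

```
pcs cluster start --all
pcs quorum status
```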
CI/CD overview | CI/CD overview OpenShift Container Platform 4.12 Contains information about CI/CD for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/cicd_overview/index |
Chapter 8. Managing VMs | Chapter 8. Managing VMs 8.1. Installing the QEMU guest agent and VirtIO drivers The QEMU guest agent is a daemon that runs on the virtual machine (VM) and passes information to the host about the VM, users, file systems, and secondary networks. You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 8.1.1. Installing the QEMU guest agent 8.1.1.1. Installing the QEMU guest agent on a Linux VM The qemu-guest-agent is available by default in Red Hat Enterprise Linux (RHEL) virtual machines (VMs). To create snapshots of a VM in the Running state with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which a snapshot is taken are reflected in the snapshot indications that are displayed in the web console or CLI. If these conditions do not meet your requirements, try creating the snapshot again, or use an offline snapshot. Procedure Log in to the VM by using a console or SSH. Install the QEMU guest agent by running the following command: USD yum install -y qemu-guest-agent Ensure the service is persistent and start it: USD systemctl enable --now qemu-guest-agent Verification Run the following command to verify that AgentConnected is listed in the VM spec: USD oc get vm <vm_name> 8.1.1.2. Installing the QEMU guest agent on a Windows VM For Windows virtual machines (VMs), the QEMU guest agent is included in the VirtIO drivers. You can install the drivers during a Windows installation or on an existing Windows VM. To create snapshots of a VM in the Running state with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. Note that in a Windows guest operating system, quiescing also requires the Volume Shadow Copy Service (VSS). Therefore, before you create a snapshot, ensure that VSS is enabled on the VM as well. The conditions under which a snapshot is taken are reflected in the snapshot indications that are displayed in the web console or CLI. If these conditions do not meet your requirements, try creating the snapshot again or use an offline snapshot. Procedure In the Windows guest operating system, use the File Explorer to navigate to the guest-agent directory in the virtio-win CD drive. Run the qemu-ga-x86_64.msi installer. Verification Obtain a list of network services by running the following command: USD net start Verify that the output contains the QEMU Guest Agent . 8.1.2. Installing VirtIO drivers on Windows VMs VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines (VMs) to run in OpenShift Virtualization. The drivers are shipped with the rest of the images and do not require a separate download. The container-native-virtualization/virtio-win container disk must be attached to the VM as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation or add them to an existing Windows installation. 
After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the VM. Table 8.1. Supported drivers Driver name Hardware ID Description viostor VEN_1AF4&DEV_1001 VEN_1AF4&DEV_1042 The block driver. Sometimes labeled as an SCSI Controller in the Other devices group. viorng VEN_1AF4&DEV_1005 VEN_1AF4&DEV_1044 The entropy source driver. Sometimes labeled as a PCI Device in the Other devices group. NetKVM VEN_1AF4&DEV_1000 VEN_1AF4&DEV_1041 The network driver. Sometimes labeled as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. 8.1.2.1. Attaching VirtIO container disk to Windows VMs during installation You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done during creation of the VM. Procedure When creating a Windows VM from a template, click Customize VirtualMachine . Select Mount Windows drivers disk . Click the Customize VirtualMachine parameters . Click Create VirtualMachine . After the VM is created, the virtio-win SATA CD disk will be attached to the VM. 8.1.2.2. Attaching VirtIO container disk to an existing Windows VM You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done to an existing VM. Procedure Navigate to the existing Windows VM, and click Actions Stop . Go to VM Details Configuration Disks and click Add disk . Add windows-driver-disk from container source, set the Type to CD-ROM , and then set the Interface to SATA . Click Save . Start the VM, and connect to a graphical console. 8.1.2.3. Installing VirtIO drivers during Windows installation You can install the VirtIO drivers while installing Windows on a virtual machine (VM). Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Prerequisites A storage device containing the virtio drivers must be attached to the VM. Procedure In the Windows operating system, use the File Explorer to navigate to the virtio-win CD drive. Double-click the drive to run the appropriate installer for your VM. For a 64-bit vCPU, select the virtio-win-gt-x64 installer. 32-bit vCPUs are no longer supported. Optional: During the Custom Setup step of the installer, select the device drivers you want to install. The recommended driver set is selected by default. After the installation is complete, select Finish . Reboot the VM. Verification Open the system disk on the PC. This is typically C: . Navigate to Program Files Virtio-Win . If the Virtio-Win directory is present and contains a sub-directory for each driver, the installation was successful. 8.1.2.4. Installing VirtIO drivers from a SATA CD drive on an existing Windows VM You can install the VirtIO drivers from a SATA CD drive on an existing Windows virtual machine (VM). Note This procedure uses a generic approach to adding drivers to Windows. See the installation documentation for your version of Windows for specific installation steps. Prerequisites A storage device containing the virtio drivers must be attached to the VM as a SATA CD drive. Procedure Start the VM and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . 
Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the VM to complete the driver installation. 8.1.2.5. Installing VirtIO drivers from a container disk added as a SATA CD drive You can install VirtIO drivers from a container disk that you add to a Windows virtual machine (VM) as a SATA CD drive. Tip Downloading the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog is not mandatory, because the container disk is downloaded from the Red Hat registry if it is not already present in the cluster. However, downloading reduces the installation time. Prerequisites You must have access to the Red Hat registry or to the downloaded container-native-virtualization/virtio-win container disk in a restricted environment. Procedure Add the container-native-virtualization/virtio-win container disk as a CD drive by editing the VirtualMachine manifest: # ... spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk 1 OpenShift Virtualization boots the VM disks in the order defined in the VirtualMachine manifest. You can either define other VM disks that boot before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the VM boots from the correct disk. If you configure the boot order for a disk, you must configure the boot order for the other disks. Apply the changes: If the VM is not running, run the following command: USD virtctl start <vm> -n <namespace> If the VM is running, reboot the VM or run the following command: USD oc apply -f <vm.yaml> After the VM has started, install the VirtIO drivers from the SATA CD drive. 8.1.3. Updating VirtIO drivers 8.1.3.1. Updating VirtIO drivers on a Windows VM Update the virtio drivers on a Windows virtual machine (VM) by using the Windows Update service. Prerequisites The cluster must be connected to the internet. Disconnected clusters cannot reach the Windows Update service. Procedure In the Windows Guest operating system, click the Windows key and select Settings . Navigate to Windows Update Advanced Options Optional Updates . Install all updates from Red Hat, Inc. . Reboot the VM. Verification On the Windows VM, navigate to the Device Manager . Select a device. Select the Driver tab. Click Driver Details and confirm that the virtio driver details displays the correct version. 8.2. Connecting to virtual machine consoles You can connect to the following consoles to access running virtual machines (VMs): VNC console Serial console Desktop viewer for Windows VMs 8.2.1. Connecting to the VNC console You can connect to the VNC console of a virtual machine by using the Red Hat OpenShift Service on AWS web console or the virtctl command line tool. 8.2.1.1. 
Connecting to the VNC console by using the web console You can connect to the VNC console of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Note If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Optional: To switch to the vGPU display of a Windows VM, select Ctl + Alt + 2 from the Send key list. Select Ctl + Alt + 1 from the Send key list to restore the default display. To end the console session, click outside the console pane and then click Disconnect . 8.2.1.2. Connecting to the VNC console by using virtctl You can use the virtctl command line tool to connect to the VNC console of a running virtual machine. Note If you run the virtctl vnc command on a remote machine over an SSH connection, you must forward the X session to your local machine by running the ssh command with the -X or -Y flags. Prerequisites You must install the virt-viewer package. Procedure Run the following command to start the console session: USD virtctl vnc <vm_name> If the connection fails, run the following command to collect troubleshooting information: USD virtctl vnc <vm_name> -v 4 8.2.1.3. Generating a temporary token for the VNC console To access the VNC of a virtual machine (VM), generate a temporary authentication bearer token for the Kubernetes API. Note Kubernetes also supports authentication using client certificates, instead of a bearer token, by modifying the curl command. Prerequisites A running VM with OpenShift Virtualization 4.14 or later and ssp-operator 4.14 or later Procedure Enable the feature gate in the HyperConverged ( HCO ) custom resource (CR): USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/deployVmConsoleProxy", "value": true}]' Generate a token by entering the following command: USD curl --header "Authorization: Bearer USD{TOKEN}" \ "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>" The <duration> parameter can be set in hours and minutes, with a minimum duration of 10 minutes. For example: 5h30m . If this parameter is not set, the token is valid for 10 minutes by default. Sample output: { "token": "eyJhb..." } Optional: Use the token provided in the output to create a variable: USD export VNC_TOKEN="<token>" You can now use the token to access the VNC console of a VM. Verification Log in to the cluster by entering the following command: USD oc login --token USD{VNC_TOKEN} Test access to the VNC console of the VM by using the virtctl command: USD virtctl vnc <vm_name> -n <namespace> Warning It is currently not possible to revoke a specific token. To revoke a token, you must delete the service account that was used to create it. However, this also revokes all other tokens that were created by using the service account. Use the following command with caution: USD virtctl delete serviceaccount --namespace "<namespace>" "<vm_name>-vnc-access" 8.2.1.3.1. Granting token generation permission for the VNC console by using the cluster role As a cluster administrator, you can install a cluster role and bind it to a user or service account to allow access to the endpoint that generates tokens for the VNC console. 
Procedure Choose to bind the cluster role to either a user or service account. Run the following command to bind the cluster role to a user: USD kubectl create rolebinding "USD{ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --user="USD{USER_NAME}" Run the following command to bind the cluster role to a service account: USD kubectl create rolebinding "USD{ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --serviceaccount="USD{SERVICE_ACCOUNT_NAME}" 8.2.2. Connecting to the serial console You can connect to the serial console of a virtual machine by using the Red Hat OpenShift Service on AWS web console or the virtctl command line tool. Note Running concurrent VNC connections to a single virtual machine is not currently supported. 8.2.2.1. Connecting to the serial console by using the web console You can connect to the serial console of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background. Select Serial console from the console list. To end the console session, click outside the console pane and then click Disconnect . 8.2.2.2. Connecting to the serial console by using virtctl You can use the virtctl command line tool to connect to the serial console of a running virtual machine. Procedure Run the following command to start the console session: USD virtctl console <vm_name> Press Ctrl+] to end the console session. 8.2.3. Connecting to the desktop viewer You can connect to a Windows virtual machine (VM) by using the desktop viewer and the Remote Desktop Protocol (RDP). 8.2.3.1. Connecting to the desktop viewer by using the web console You can connect to the desktop viewer of a Windows virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Prerequisites You installed the QEMU guest agent on the Windows VM. You have an RDP client installed. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background. Select Desktop viewer from the console list. Click Create RDP Service to open the RDP Service dialog. Select Expose RDP Service and click Save to create a node port service. Click Launch Remote Desktop to download an .rdp file and launch the desktop viewer. 8.3. Configuring SSH access to virtual machines You can configure SSH access to virtual machines (VMs) by using the following methods: virtctl ssh command You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source. virtctl port-forward command You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH. Service You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service. 
Secondary network You configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address. 8.3.1. Access configuration considerations Each method for configuring access to a virtual machine (VM) has advantages and limitations, depending on the traffic load and client requirements. Services provide excellent performance and are recommended for applications that are accessed from outside the cluster. If the internal cluster network cannot handle the traffic load, you can configure a secondary network. virtctl ssh and virtctl port-forwarding commands Simple to configure. Recommended for troubleshooting VMs. virtctl port-forwarding recommended for automated configuration of VMs with Ansible. Dynamic public SSH keys can be used to provision VMs with Ansible. Not recommended for high-traffic applications like Rsync or Remote Desktop Protocol because of the burden on the API server. The API server must be able to handle the traffic load. The clients must be able to access the API server. The clients must have access credentials for the cluster. Cluster IP service The internal cluster network must be able to handle the traffic load. The clients must be able to access an internal cluster IP address. Node port service The internal cluster network must be able to handle the traffic load. The clients must be able to access at least one node. Load balancer service A load balancer must be configured. Each node must be able to handle the traffic load of one or more load balancer services. Secondary network Excellent performance because traffic does not go through the internal cluster network. Allows a flexible approach to network topology. Guest operating system must be configured with appropriate security because the VM is exposed directly to the secondary network. If a VM is compromised, an intruder could gain access to the secondary network. 8.3.2. Using virtctl ssh You can add a public SSH key to a virtual machine (VM) and connect to the VM by running the virtctl ssh command. This method is simple to configure. However, it is not recommended for high traffic loads because it places a burden on the API server. 8.3.2.1. About static and dynamic SSH key management You can add public SSH keys to virtual machines (VMs) statically at first boot or dynamically at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. Static SSH key management You can add a statically managed SSH key to a VM with a guest operating system that supports configuration by using a cloud-init data source. The key is added to the virtual machine (VM) at first boot. You can add the key by using one of the following methods: Add a key to a single VM when you create it by using the web console or the command line. Add a key to a project by using the web console. Afterwards, the key is automatically added to the VMs that you create in this project. Use cases As a VM owner, you can provision all your newly created VMs with a single key. Dynamic SSH key management You can enable dynamic SSH key management for a VM with Red Hat Enterprise Linux (RHEL) 9 installed. Afterwards, you can update the key during runtime. The key is added by the QEMU guest agent, which is installed with Red Hat boot sources. When dynamic key management is disabled, the default key management setting of a VM is determined by the image used for the VM. 
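Because the dynamically managed key is read from a Secret object, granting or revoking access at runtime amounts to updating that Secret. The following is a minimal sketch, assuming a Secret named authorized-keys in the VM namespace (the same name used in the command-line examples later in this section) and an example key path:
USD oc create secret generic authorized-keys -n example-namespace \
    --from-file=key=/home/user/.ssh/new_key.pub \
    --dry-run=client -o yaml | oc apply -f -
The QEMU guest agent then propagates the updated key to the configured users at runtime, as described above.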
Use cases Granting or revoking access to VMs: As a cluster administrator, you can grant or revoke remote VM access by adding or removing the keys of individual users from a Secret object that is applied to all VMs in a namespace. User access: You can add your access credentials to all VMs that you create and manage. Ansible provisioning: As an operations team member, you can create a single secret that contains all the keys used for Ansible provisioning. As a VM owner, you can create a VM and attach the keys used for Ansible provisioning. Key rotation: As a cluster administrator, you can rotate the Ansible provisioner keys used by VMs in a namespace. As a workload owner, you can rotate the key for the VMs that you manage. 8.3.2.2. Static key management You can add a statically managed public SSH key when you create a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console or the command line. The key is added as a cloud-init data source when the VM boots for the first time. You can also add a public SSH key to a project when you create a VM by using the web console. The key is saved as a secret and is added automatically to all VMs that you create. Note If you add a secret to a project and then delete the VM, the secret is retained because it is a namespace resource. You must delete the secret manually. 8.3.2.2.1. Adding a key when creating a VM from a template You can add a statically managed public SSH key when you create a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data. Optional: You can add a key to a project. Afterwards, this key is added automatically to VMs that you create in the project. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile. The guest operating system must support configuration from a cloud-init data source. Click Customize VirtualMachine . Click . Click the Scripts tab. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . Click Create VirtualMachine . The VirtualMachine details page displays the progress of the VM creation. Verification Click the Scripts tab on the Configuration tab. The secret name is displayed in the Authorized SSH key section. 8.3.2.2.2. Creating a VM from an instance type by using the web console You can create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM. You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list. You can add a statically managed SSH key when you create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data. Procedure In the web console, navigate to Virtualization Catalog . The InstanceTypes tab opens by default. 
Select either of the following options: Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list. Note The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list. Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save . Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link. In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line. Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button. Click an instance type tile and select the resource size appropriate for your workload. Optional: Choose the virtual machine details, including the VM's name, that apply to the volume you are booting from: For a Linux-based volume, follow these steps to configure SSH: If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Follow these steps: Browse to the public SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . For a Windows volume, follow either of these sets of steps to configure sysprep options: If you have not already added sysprep options for the Windows volume, follow these steps: Click the edit icon beside Sysprep in the VirtualMachine details section. Add the Autounattend.xml answer file. Add the Unattend.xml answer file. Click Save . If you want to use existing sysprep options for the Windows volume, follow these steps: Click Attach existing sysprep . Enter the name of the existing sysprep Unattend.xml answer file. Click Save . Optional: If you are creating a Windows VM, you can mount a Windows driver disk: Click the Customize VirtualMachine button. On the VirtualMachine details page, click Storage . Select the Mount Windows drivers disk checkbox. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 8.3.2.2.3. Adding a key when creating a VM by using the command line You can add a statically managed public SSH key when you create a virtual machine (VM) by using the command line. The key is added to the VM at first boot. The key is added to the VM as a cloud-init data source. This method separates the access credentials from the application data in the cloud-init user data. This method does not affect cloud-init user data. 
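The prerequisites and example manifest that follow assume an existing SSH key pair whose public key is stored, base64-encoded, in a Secret object. If you still need to generate and encode a key, the following is a minimal sketch; the key type, file path, and comment are example values:
USD ssh-keygen -t ed25519 -f ~/.ssh/example-vm-key -C "example-vm access"
USD base64 -w0 ~/.ssh/example-vm-key.pub
Paste the base64 output into the data.key field of the Secret in the example manifest.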
Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Create a manifest file for a VirtualMachine object and a Secret object: Example manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config user: cloud-user name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3 1 Specify the cloudInitNoCloud data source. 2 Specify the Secret object name. 3 Paste the public SSH key. Create the VirtualMachine and Secret objects by running the following command: USD oc create -f <manifest_file>.yaml Start the VM by running the following command: USD virtctl start vm example-vm -n example-namespace Verification Get the VM configuration: USD oc describe vm example-vm -n example-namespace Example output apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys # ... 8.3.2.3. Dynamic key management You can enable dynamic key injection for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console or the command line. Then, you can update the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. If you disable dynamic key injection, the VM inherits the key management method of the image from which it was created. 8.3.2.3.1. Enabling dynamic key injection when creating a VM from a template You can enable dynamic public SSH key injection when you create a virtual machine (VM) from a template by using the Red Hat OpenShift Service on AWS web console. Then, you can update the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Navigate to Virtualization Catalog in the web console. Click the Red Hat Enterprise Linux 9 VM tile. Click Customize VirtualMachine . Click . Click the Scripts tab. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Set Dynamic SSH key injection to on. Click Save . Click Create VirtualMachine . The VirtualMachine details page displays the progress of the VM creation. Verification Click the Scripts tab on the Configuration tab. The secret name is displayed in the Authorized SSH key section. 8.3.2.3.2. 
Creating a VM from an instance type by using the web console You can create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM. You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list. You can enable dynamic SSH key injection when you create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. Then, you can add or revoke the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9. Procedure In the web console, navigate to Virtualization Catalog . The InstanceTypes tab opens by default. Select either of the following options: Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list. Note The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list. Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save . Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link. In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line. Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button. Click an instance type tile and select the resource size appropriate for your workload. Click the Red Hat Enterprise Linux 9 VM tile. Optional: Choose the virtual machine details, including the VM's name, that apply to the volume you are booting from: For a Linux-based volume, follow these steps to configure SSH: If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Follow these steps: Browse to the public SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . For a Windows volume, follow either of these sets of steps to configure sysprep options: If you have not already added sysprep options for the Windows volume, follow these steps: Click the edit icon beside Sysprep in the VirtualMachine details section. Add the Autounattend.xml answer file. Add the Unattend.xml answer file. Click Save . If you want to use existing sysprep options for the Windows volume, follow these steps: Click Attach existing sysprep . Enter the name of the existing sysprep Unattend.xml answer file. Click Save . 
Set Dynamic SSH key injection in the VirtualMachine details section to on. Optional: If you are creating a Windows VM, you can mount a Windows driver disk: Click the Customize VirtualMachine button. On the VirtualMachine details page, click Storage . Select the Mount Windows drivers disk checkbox. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 8.3.2.3.3. Enabling dynamic SSH key injection by using the web console You can enable dynamic key injection for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Then, you can update the public SSH key at runtime. The key is added to the VM by the QEMU guest agent, which is installed with Red Hat Enterprise Linux (RHEL) 9. Prerequisites The guest operating system is RHEL 9. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. On the Configuration tab, click Scripts . If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Set Dynamic SSH key injection to on. Click Save . 8.3.2.3.4. Enabling dynamic key injection by using the command line You can enable dynamic key injection for a virtual machine (VM) by using the command line. Then, you can update the public SSH key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed automatically with RHEL 9. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Create a manifest file for a VirtualMachine object and a Secret object: Example manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: ["cloud-user"] source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3 1 Specify the cloudInitNoCloud data source. 2 Specify the Secret object name. 3 Paste the public SSH key. 
Create the VirtualMachine and Secret objects by running the following command: USD oc create -f <manifest_file>.yaml Start the VM by running the following command: USD virtctl start vm example-vm -n example-namespace Verification Get the VM configuration: USD oc describe vm example-vm -n example-namespace Example output apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: ["cloud-user"] source: secret: secretName: authorized-keys # ... 8.3.2.4. Using the virtctl ssh command You can access a running virtual machine (VM) by using the virtctl ssh command. Prerequisites You installed the virtctl command line tool. You added a public SSH key to the VM. You have an SSH client installed. The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Run the virtctl ssh command: USD virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1 1 Specify the namespace, user name, and the SSH private key. The default SSH key location is /home/user/.ssh . If you save the key in a different location, you must specify the path. Example USD virtctl -n my-namespace ssh cloud-user@example-vm -i my-key Tip You can copy the virtctl ssh command in the web console by selecting Copy SSH command from the options menu beside a VM on the VirtualMachines page. 8.3.3. Using the virtctl port-forward command You can use your local OpenSSH client and the virtctl port-forward command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs. This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server. Prerequisites You have installed the virtctl client. The virtual machine you want to access is running. The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Add the following text to the ~/.ssh/config file on your client machine: Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p Connect to the VM by running the following command: USD ssh <user>@vm/<vm_name>.<namespace> 8.3.4. Using a service for SSH access You can create a service for a virtual machine (VM) and connect to the IP address and port exposed by the service. Services provide excellent performance and are recommended for applications that are accessed from outside the cluster or within the cluster. Ingress traffic is protected by firewalls. If the cluster network cannot handle the traffic load, consider using a secondary network for VM access. 8.3.4.1. About services A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world. ClusterIP Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. 
When a client tries to connect to the service, the client's request is load balanced among available backends. ClusterIP is the default service type. NodePort Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client. LoadBalancer Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service. Note For Red Hat OpenShift Service on AWS, you must use externalTrafficPolicy: Cluster when configuring a load-balancing service, to minimize the network downtime during live migration. 8.3.4.2. Creating a service You can create a service to expose a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console, virtctl command line tool, or a YAML file. 8.3.4.2.1. Enabling load balancer service creation by using the web console You can enable the creation of load balancer services for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Prerequisites You have configured a load balancer for the cluster. You are logged in as a user with the cluster-admin role. You created a network attachment definition for the network. Procedure Navigate to Virtualization Overview . On the Settings tab, click Cluster . Expand General settings and SSH configuration . Set SSH over LoadBalancer service to on. 8.3.4.2.2. Creating a service by using the web console You can create a node port or load balancer service for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Prerequisites You configured the cluster network to support either a load balancer or a node port. To create a load balancer service, you enabled the creation of load balancer services. Procedure Navigate to VirtualMachines and select a virtual machine to view the VirtualMachine details page. On the Details tab, select SSH over LoadBalancer from the SSH service type list. Optional: Click the copy icon to copy the SSH command to your clipboard. Verification Check the Services pane on the Details tab to view the new service. 8.3.4.2.3. Creating a service by using virtctl You can create a service for a virtual machine (VM) by using the virtctl command line tool. Prerequisites You installed the virtctl command line tool. You configured the cluster network to support the service. The environment where you installed virtctl has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Create a service by running the following command: USD virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1 1 Specify the ClusterIP , NodePort , or LoadBalancer service type. Example USD virtctl expose vm example-vm --name example-service --type NodePort --port 22 Verification Verify the service by running the following command: USD oc get service Next steps After you create a service with virtctl , you must add the special: key label to the spec.template.metadata.labels stanza of the VirtualMachine manifest. See Creating a service by using the command line . 8.3.4.2.4. Creating a service by using the command line You can create a service and associate it with a virtual machine (VM) by using the command line. Prerequisites You configured the cluster network to support the service. 
Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Halted template: metadata: labels: special: key 1 # ... 1 Add special: key to the spec.template.metadata.labels stanza. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: # ... selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000 1 Specify the label that you added to the spec.template.metadata.labels stanza of the VirtualMachine manifest. 2 Specify ClusterIP , NodePort , or LoadBalancer . 3 Specifies a collection of network ports and protocols that you want to expose from the virtual machine. Save the Service manifest file. Create the service by running the following command: USD oc create -f example-service.yaml Restart the VM to apply the changes. Verification Query the Service object to verify that it is available: USD oc get service -n example-namespace 8.3.4.3. Connecting to a VM exposed by a service by using SSH You can connect to a virtual machine (VM) that is exposed by a service by using SSH. Prerequisites You created a service to expose the VM. You have an SSH client installed. You are logged in to the cluster. Procedure Run the following command to access the VM: USD ssh <user_name>@<ip_address> -p <port> 1 1 Specify the cluster IP for a cluster IP service, the node IP for a node port service, or the external IP address for a load balancer service. 8.3.5. Using a secondary network for SSH access You can configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address by using SSH. Important Secondary networks provide excellent performance because the traffic is not handled by the cluster network stack. However, the VMs are exposed directly to the secondary network and are not protected by firewalls. If a VM is compromised, an intruder could gain access to the secondary network. You must configure appropriate security within the operating system of the VM if you use this method. See the Multus and SR-IOV documentation in the OpenShift Virtualization Tuning & Scaling Guide for additional information about networking options. Prerequisites You configured a secondary network . You created a network attachment definition . 8.3.5.1. Configuring a VM network interface by using the web console You can configure a network interface for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Prerequisites You created a network attachment definition for the network. Procedure Navigate to Virtualization VirtualMachines . Click a VM to view the VirtualMachine details page. On the Configuration tab, click the Network interfaces tab. Click Add network interface . Enter the interface name and select the network attachment definition from the Network list. Click Save . Restart the VM to apply the changes. 8.3.5.2. Connecting to a VM attached to a secondary network by using SSH You can connect to a virtual machine (VM) attached to a secondary network by using SSH. 
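The procedure below reads the DHCP-allocated address from the oc describe output. If you script the connection, you can instead extract the address directly from the VirtualMachineInstance status; the following is a minimal sketch, assuming the QEMU guest agent reports the secondary interface there:
USD oc get vmi <vm_name> -n <namespace> \
    -o jsonpath='{range .status.interfaces[*]}{.name}{"\t"}{.ipAddress}{"\n"}{end}'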
Prerequisites You attached a VM to a secondary network with a DHCP server. You have an SSH client installed. Procedure Obtain the IP address of the VM by running the following command: USD oc describe vm <vm_name> -n <namespace> Example output Connect to the VM by running the following command: USD ssh <user_name>@<ip_address> -i <ssh_key> Example USD ssh [email protected] -i ~/.ssh/id_rsa_cloud-user 8.4. Editing virtual machines You can update a virtual machine (VM) configuration by using the Red Hat OpenShift Service on AWS web console. You can update the YAML file or the VirtualMachine details page. You can also edit a VM by using the command line. 8.4.1. Changing the instance type of a VM You can change the instance type associated with a running virtual machine (VM) by using the web console. The change takes effect immediately. Prerequisites You created the VM by using an instance type. Procedure In the Red Hat OpenShift Service on AWS web console, click Virtualization VirtualMachines . Select a VM to open the VirtualMachine details page. Click the Configuration tab. On the Details tab, click the instance type text to open the Edit Instancetype dialog. For example, click 1 CPU | 2 GiB Memory . Edit the instance type by using the Series and Size lists. Select an item from the Series list to show the relevant sizes for that series. For example, select General Purpose . Select the VM's new instance type from the Size list. For example, select medium: 1 CPUs, 4Gi Memory , which is available in the General Purpose series. Click Save . Verification Click the YAML tab. Click Reload . Review the VM YAML to confirm that the instance type changed. 8.4.2. Hot plugging memory on a virtual machine You can add or remove the amount of memory allocated to a virtual machine (VM) without having to restart the VM by using the Red Hat OpenShift Service on AWS web console. Procedure Navigate to Virtualization VirtualMachines . Select the required VM to open the VirtualMachine details page. On the Configuration tab, click Edit CPU|Memory . Enter the desired amount of memory and click Save . The system applies these changes immediately. If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a RestartRequired condition is added to the VM. Note Linux guests require a kernel version of 5.16 or later and Windows guests require the latest viomem drivers. 8.4.3. Hot plugging CPUs on a virtual machine You can increase or decrease the number of CPU sockets allocated to a virtual machine (VM) without having to restart the VM by using the Red Hat OpenShift Service on AWS web console. Procedure Navigate to Virtualization VirtualMachines . Select the required VM to open the VirtualMachine details page. On the Configuration tab, click Edit CPU|Memory . Select the vCPU radio button. Enter the desired number of vCPU sockets and click Save . If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a RestartRequired condition is added to the VM. 8.4.4. Editing a virtual machine by using the command line You can edit a virtual machine (VM) by using the command line. Prerequisites You installed the oc CLI. Procedure Obtain the virtual machine configuration by running the following command: USD oc edit vm <vm_name> Edit the YAML configuration. If you edit a running virtual machine, you need to do one of the following: Restart the virtual machine. 
Run the following command for the new configuration to take effect: USD oc apply vm <vm_name> -n <namespace> 8.4.5. Adding a disk to a virtual machine You can add a virtual disk to a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. On the Disks tab, click Add disk . Specify the Source , Name , Size , Type , Interface , and Storage Class . Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. Click Add . Note If the VM is running, you must restart the VM to apply the change. 8.4.5.1. Storage fields Field Description Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. If you do not specify these parameters, the system uses the default storage profile values. Parameter Option Parameter description Volume Mode Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This mode is required for live migration. 8.4.6. Mounting a Windows driver disk on a virtual machine You can mount a Windows driver disk on a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Procedure Navigate to Virtualization VirtualMachines . Select the required VM to open the VirtualMachine details page. On the Configuration tab, click Storage . Select the Mount Windows drivers disk checkbox. The Windows driver disk is displayed in the list of mounted disks. 8.4.7. Adding a secret, config map, or service account to a virtual machine You add a secret, config map, or service account to a virtual machine by using the Red Hat OpenShift Service on AWS web console. These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk. If the virtual machine is running, changes do not take effect until you restart the virtual machine. 
The newly added resources are marked as pending changes at the top of the page. Prerequisites The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click Configuration Environment . Click Add Config Map, Secret or Service Account . Click Select a resource and select a resource from the list. A six character serial number is automatically generated for the selected resource. Optional: Click Reload to revert the environment to its last saved state. Click Save . Verification On the VirtualMachine details page, click Configuration Disks and verify that the resource is displayed in the list of disks. Restart the virtual machine by clicking Actions Restart . You can now mount the secret, config map, or service account as you would mount any other disk. Additional resources for config maps, secrets, and service accounts Understanding config maps Providing sensitive data to pods Understanding and creating service accounts 8.5. Editing boot order You can update the values for a boot order list by using the web console or the CLI. With Boot Order in the Virtual Machine Overview page, you can: Select a disk or network interface controller (NIC) and add it to the boot order list. Edit the order of the disks or NICs in the boot order list. Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources. 8.5.1. Adding items to a boot order list in the web console Add items to a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine. Add any additional disks or NICs to the boot order list. Click Save . Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 8.5.2. Editing a boot order list in the web console Edit the boot order list in the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Choose the appropriate method to move the item in the boot order list: If you do not use a screen reader, hover over the arrow icon to the item that you want to move, drag the item up or down, and drop it in a location of your choice. If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice. Click Save . 
Note If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 8.5.3. Editing a boot order list in the YAML configuration file Edit the boot order list in a YAML configuration file by using the CLI. Procedure Open the YAML configuration file for the virtual machine by running the following command: USD oc edit vm <vm_name> -n <namespace> Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example: disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - bootOrder: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default 1 The boot order value specified for the disk. 2 The boot order value specified for the network interface controller. Save the YAML file. 8.5.4. Removing items from a boot order list in the web console Remove items from a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 8.6. Deleting virtual machines You can delete a virtual machine from the web console or by using the oc command line interface. 8.6.1. Deleting a virtual machine using the web console Deleting a virtual machine permanently removes it from the cluster. Procedure In the Red Hat OpenShift Service on AWS console, click Virtualization VirtualMachines from the side menu. Click the Options menu beside a virtual machine and select Delete . Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions Delete . Optional: Select With grace period or clear Delete disks . Click Delete to permanently delete the virtual machine. 8.6.2. Deleting a virtual machine by using the CLI You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines. Prerequisites Identify the name of the virtual machine that you want to delete. Procedure Delete the virtual machine by running the following command: USD oc delete vm <vm_name> Note This command only deletes a VM in the current project. Specify the -n <project_name> option if the VM you want to delete is in a different project or namespace. 8.7. 
Exporting virtual machines You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes. You create a VirtualMachineExport custom resource (CR) by using the command line interface. Alternatively, you can use the virtctl vmexport command to create a VirtualMachineExport CR and to download exported volumes. Note You can migrate virtual machines between OpenShift Virtualization clusters by using the Migration Toolkit for Virtualization . 8.7.1. Creating a VirtualMachineExport custom resource You can create a VirtualMachineExport custom resource (CR) to export the following objects: Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM. VM snapshot: Exports PVCs contained in a VirtualMachineSnapshot CR. PVC: Exports a PVC. If the PVC is used by another pod, such as the virt-launcher pod, the export remains in a Pending state until the PVC is no longer in use. The VirtualMachineExport CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress or Route . The export server supports the following file formats: raw : Raw disk image file. gzip : Compressed disk image file. dir : PVC directory and files. tar.gz : Compressed PVC file. Prerequisites The VM must be shut down for a VM export. Procedure Create a VirtualMachineExport manifest to export a volume from a VirtualMachine , VirtualMachineSnapshot , or PersistentVolumeClaim CR according to the following example and save it as example-export.yaml : VirtualMachineExport example apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: "kubevirt.io" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3 1 Specify the appropriate API group: "kubevirt.io" for VirtualMachine . "snapshot.kubevirt.io" for VirtualMachineSnapshot . "" for PersistentVolumeClaim . 2 Specify VirtualMachine , VirtualMachineSnapshot , or PersistentVolumeClaim . 3 Optional. The default duration is 2 hours. Create the VirtualMachineExport CR: USD oc create -f example-export.yaml Get the VirtualMachineExport CR: USD oc get vmexport example-export -o yaml The internal and external links for the exported volumes are displayed in the status stanza: Output example apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: "" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: "2022-06-21T14:10:09Z" reason: podReady status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-06-21T14:09:02Z" reason: pvcBound status: "True" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- ... 
-----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: virt-export-example-export 1 External links are accessible from outside the cluster by using an Ingress or Route . 2 Internal links are only valid inside the cluster. 8.7.2. Accessing exported virtual machine manifests After you export a virtual machine (VM) or snapshot, you can get the VirtualMachine manifest and related information from the export server. Prerequisites You exported a virtual machine or VM snapshot by creating a VirtualMachineExport custom resource (CR). Note VirtualMachineExport objects that have the spec.source.kind: PersistentVolumeClaim parameter do not generate virtual machine manifests. Procedure To access the manifests, you must first copy the certificates from the source cluster to the target cluster. Log in to the source cluster. Save the certificates to the cacert.crt file by running the following command: USD oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1 1 Replace <export_name> with the metadata.name value from the VirtualMachineExport object. Copy the cacert.crt file to the target cluster. Decode the token in the source cluster and save it to the token_decode file by running the following command: USD oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1 1 Replace <export_name> with the metadata.name value from the VirtualMachineExport object. Copy the token_decode file to the target cluster. Get the VirtualMachineExport custom resource by running the following command: USD oc get vmexport <export_name> -o yaml Review the status.links stanza, which is divided into external and internal sections. Note the manifests.url fields within each section: Example output apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: "kubevirt.io" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: #... links: external: #... manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: #... manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export 1 Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the public certificate for the external URL's ingress or route. 2 Contains a secret containing a header that is compatible with Containerized Data Importer (CDI). The header contains a text version of the export token. 3 Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the certificate for the internal URL's export server. Log in to the target cluster. 
Get the Secret manifest by running the following command: USD curl --cacert cacert.crt <secret_manifest_url> -H \ 1 "x-kubevirt-export-token:token_decode" -H \ 2 "Accept:application/yaml" 1 Replace <secret_manifest_url> with an auth-header-secret URL from the VirtualMachineExport YAML output. 2 Reference the token_decode file that you created earlier. For example: USD curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml" Get the manifests of type: all , such as the ConfigMap and VirtualMachine manifests, by running the following command: USD curl --cacert cacert.crt <all_manifest_url> -H \ 1 "x-kubevirt-export-token:token_decode" -H \ 2 "Accept:application/yaml" 1 Replace <all_manifest_url> with a URL from the VirtualMachineExport YAML output. 2 Reference the token_decode file that you created earlier. For example: USD curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml" Next steps You can now create the ConfigMap and VirtualMachine objects on the target cluster by using the exported manifests. 8.8. Managing virtual machine instances If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI). The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port. 8.8.1. About virtual machine instances A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI). A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs: List standalone VMIs and their details. Edit labels and annotations for a standalone VMI. Delete a standalone VMI. When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects. Note Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs. When you edit a VM, some settings might be applied to the VMIs dynamically and without the need for a restart. Any change made to a VM object that cannot be applied to the VMIs dynamically will trigger the RestartRequired VM condition. Changes take effect after the reboot, and the condition is removed. 8.8.2. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). 
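The procedure below lists every VMI in the cluster. If you only want standalone VMIs (those without an owning VirtualMachine ) from the CLI, you can filter on the owner references; the following is a minimal sketch that assumes the jq tool is installed:
USD oc get vmis -A -o json | jq -r '.items[] | select(.metadata.ownerReferences == null) | "\(.metadata.namespace)\t\(.metadata.name)"'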
Procedure List all VMIs by running the following command: USD oc get vmis -A 8.8.3. Listing standalone virtual machine instances using the web console Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs). Note VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI. Procedure Click Virtualization VirtualMachines from the side menu. You can identify a standalone VMI by a dark colored badge next to its name. 8.8.4. Editing a standalone virtual machine instance using the web console You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable. Procedure In the Red Hat OpenShift Service on AWS console, click Virtualization VirtualMachines from the side menu. Select a standalone VMI to open the VirtualMachineInstance details page. On the Details tab, click the pencil icon beside Annotations or Labels . Make the relevant changes and click Save . 8.8.5. Deleting a standalone virtual machine instance using the CLI You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI). Prerequisites Identify the name of the VMI that you want to delete. Procedure Delete the VMI by running the following command: USD oc delete vmi <vmi_name> 8.8.6. Deleting a standalone virtual machine instance using the web console Delete a standalone virtual machine instance (VMI) from the web console. Procedure In the Red Hat OpenShift Service on AWS web console, click Virtualization VirtualMachines from the side menu. Click Actions Delete VirtualMachineInstance . In the confirmation pop-up window, click Delete to permanently delete the standalone VMI. 8.9. Controlling virtual machine states You can stop, start, restart, pause, and unpause virtual machines from the web console. You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port. 8.9.1. Starting a virtual machine You can start a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to start. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Start VirtualMachine . To view comprehensive information about the selected virtual machine before you start it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Start . Note When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes. 8.9.2. Stopping a virtual machine You can stop a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to stop. 
Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Stop VirtualMachine . To view comprehensive information about the selected virtual machine before you stop it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Stop . 8.9.3. Restarting a virtual machine You can restart a running virtual machine from the web console. Important To avoid errors, do not restart a virtual machine while it has a status of Importing . Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to restart. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Restart . To view comprehensive information about the selected virtual machine before you restart it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Restart . 8.9.4. Pausing a virtual machine You can pause a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to pause. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Pause VirtualMachine . To view comprehensive information about the selected virtual machine before you pause it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Pause . 8.9.5. Unpausing a virtual machine You can unpause a paused virtual machine from the web console. Prerequisites At least one of your virtual machines must have a status of Paused . Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to unpause. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Unpause VirtualMachine . To view comprehensive information about the selected virtual machine before you unpause it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Unpause . 8.9.6. Controlling the state of multiple virtual machines You can start, stop, restart, pause, and unpause multiple virtual machines from the web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Optional: To limit the number of displayed virtual machines, select a relevant project from the Projects list. Select the checkbox next to each virtual machine that you want to work with. To select all virtual machines, click the checkbox in the VirtualMachines table header. Click Actions and select the intended action from the menu. 8.10. Using virtual Trusted Platform Module devices Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine (VM) or VirtualMachineInstance (VMI) manifest. Important With OpenShift Virtualization 4.18 and newer, you can export virtual machines (VMs) with attached vTPM devices, create snapshots of these VMs , and restore VMs from these snapshots .
However, cloning a VM with a vTPM device attached to it or creating a new VM from its snapshot is not supported. 8.10.1. About vTPM devices A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip. You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip. A vTPM device also protects virtual machines by storing secrets without physical hardware. OpenShift Virtualization supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. You must specify the storage class to be used by the PVC by setting the vmStateStorageClass attribute in the HyperConverged custom resource (CR): kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: <storage_class_name> # ... If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one. 8.10.2. Adding a vTPM device to a virtual machine Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also stores secrets for that VM. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Run the following command to update the VM configuration: USD oc edit vm <vm_name> -n <namespace> Edit the VM specification to add the vTPM device. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: 1 persistent: true 2 # ... 1 Adds the vTPM device to the VM. 2 Specifies that the vTPM device state persists after the VM is shut down. The default value is false . To apply your changes, save and exit the editor. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 8.11. Managing virtual machines with OpenShift Pipelines Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD framework that allows developers to design and run each step of the CI/CD pipeline in its own container. By using OpenShift Pipelines tasks and the example pipeline, you can do the following: Create and manage virtual machines (VMs), persistent volume claims (PVCs), data volumes, and data sources. Run commands in VMs. Manipulate disk images with libguestfs tools. The tasks are located in the task catalog (ArtifactHub) . The example Windows pipeline is located in the pipeline catalog (ArtifactHub) . 8.11.1. Prerequisites You have access to an Red Hat OpenShift Service on AWS cluster with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed OpenShift Pipelines . 8.11.2. Supported virtual machine tasks The following table shows the supported tasks. Table 8.2. Supported virtual machine tasks Task Description create-vm-from-manifest Create a virtual machine from a provided manifest or with virtctl . create-vm-from-template Create a virtual machine from a template. copy-template Copy a virtual machine template. modify-vm-template Modify a virtual machine template. modify-data-object Create or delete data volumes or data sources. cleanup-vm Run a script or a command in a virtual machine and stop or delete the virtual machine afterward. disk-virt-customize Use the virt-customize tool to run a customization script on a target PVC. disk-virt-sysprep Use the virt-sysprep tool to run a sysprep script on a target PVC. 
wait-for-vmi-status Wait for a specific status of a virtual machine instance and fail or succeed based on the status. Note Virtual machine creation in pipelines now utilizes ClusterInstanceType and ClusterPreference instead of template-based tasks, which have been deprecated. The create-vm-from-template , copy-template , and modify-vm-template commands remain available but are not used in default pipeline tasks. 8.11.3. Windows EFI installer pipeline You can run the Windows EFI installer pipeline by using the web console or CLI. The Windows EFI installer pipeline installs Windows 10, Windows 11, or Windows Server 2022 into a new data volume from a Windows installation image (ISO file). A custom answer file is used to run the installation process. Note The Windows EFI installer pipeline uses a config map file with sysprep predefined by Red Hat OpenShift Service on AWS and suitable for Microsoft ISO files. For ISO files pertaining to different Windows editions, it may be necessary to create a new config map file with a system-specific sysprep definition. 8.11.3.1. Running the example pipelines using the web console You can run the example pipelines from the Pipelines menu in the web console. Procedure Click Pipelines Pipelines in the side menu. Select a pipeline to open the Pipeline details page. From the Actions list, select Start . The Start Pipeline dialog is displayed. Keep the default values for the parameters and then click Start to run the pipeline. The Details tab tracks the progress of each task and displays the pipeline status. 8.11.3.2. Running the example pipelines using the CLI Use a PipelineRun resource to run the example pipelines. A PipelineRun object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a TaskRun object for each task in the pipeline. Procedure To run the Microsoft Windows 11 installer pipeline, create the following PipelineRun manifest: apiVersion: tekton.dev/v1 kind: PipelineRun metadata: generateName: windows11-installer-run- labels: pipelinerun: windows11-installer-run spec: params: - name: winImageDownloadURL value: <windows_image_download_url> 1 - name: acceptEula value: false 2 pipelineRef: params: - name: catalog value: redhat-pipelines - name: type value: artifact - name: kind value: pipeline - name: name value: windows-efi-installer - name: version value: 4 resolver: hub taskRunSpecs: - pipelineTaskName: modify-windows-iso-file PodTemplate: securityContext: fsGroup: 107 runAsUser: 107 1 Specify the URL for the Windows 11 64-bit ISO file. The product's language must be English (United States). 2 Example PipelineRun objects have a special parameter, acceptEula . By setting this parameter, you are agreeing to the applicable Microsoft user license agreements for each deployment or installation of the Microsoft products. If you set it to false, the pipeline exits at the first task. Apply the PipelineRun manifest: USD oc apply -f windows11-customize-run.yaml 8.11.4. Additional resources Creating CI/CD solutions for applications using Red Hat OpenShift Pipelines Creating a Windows VM 8.12. Advanced virtual machine management 8.12.1. Working with resource quotas for virtual machines Create and manage resource quotas for virtual machines. 8.12.1.1. Setting resource quota limits for virtual machines Resource quotas that only use requests automatically work with virtual machines (VMs). 
If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests. Procedure Set limits for a VM by editing the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: runStrategy: Halted template: spec: domain: # ... resources: requests: memory: 128Mi limits: memory: 256Mi 1 1 This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value. Save the VirtualMachine manifest. 8.12.1.2. Additional resources Resource quotas per project Resource quotas across multiple projects 8.12.2. Specifying nodes for virtual machines You can place virtual machines (VMs) on specific nodes by using node placement rules. 8.12.2.1. About node placement for virtual machines To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if: You have several VMs. To ensure fault tolerance, you want them to run on different nodes. You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node. Your VMs require specific hardware features that are not present on all available nodes. You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities. Note Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes. You can use the following rule types in the spec field of a VirtualMachine manifest: nodeSelector Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs. affinity Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object. tolerations Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint. Note Affinity rules only apply during scheduling. Red Hat OpenShift Service on AWS does not reschedule running workloads if the constraints are no longer met. 8.12.2.2. Node placement examples The following example YAML file snippets use nodePlacement , affinity , and tolerations fields to customize node placement for virtual machines. 8.12.2.2.1. Example: VM node placement with nodeSelector In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1 and example-key-2 = example-value-2 labels. Warning If there are no nodes that fit this description, the virtual machine is not scheduled. Example VM manifest metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2 # ... 8.12.2.2.2. 
Example: VM node placement with pod affinity and pod anti-affinity In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1 . If there is no such pod running on any node, the VM is not scheduled. If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2 . However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint. Example VM manifest metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname # ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 8.12.2.2.3. Example: VM node placement with node affinity In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2 . The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled. If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value . However, if all candidate nodes have this label, the scheduler ignores this constraint. Example VM manifest metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value # ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 8.12.2.2.4. Example: VM node placement with tolerations In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations , it can schedule onto the tainted nodes. Note A virtual machine that tolerates a taint is not required to schedule onto a node with that taint. Example VM manifest metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" # ... 8.12.2.3. Additional resources Specifying nodes for virtualization components Placing pods on specific nodes using node selectors Controlling pod placement on nodes using node affinity rules 8.12.3. 
Configuring the default CPU model Use the defaultCPUModel setting in the HyperConverged custom resource (CR) to define a cluster-wide default CPU model. The virtual machine (VM) CPU model depends on the availability of CPU models within the VM and the cluster. If the VM does not have a defined CPU model: The defaultCPUModel is automatically set using the CPU model defined at the cluster-wide level. If both the VM and the cluster have a defined CPU model: The VM's CPU model takes precedence. If neither the VM nor the cluster has a defined CPU model: The host-model is automatically set using the CPU model defined at the host level. 8.12.3.1. Configuring the default CPU model Configure the defaultCPUModel by updating the HyperConverged custom resource (CR). You can change the defaultCPUModel while OpenShift Virtualization is running. Note The defaultCPUModel is case sensitive. Prerequisites Install the OpenShift CLI (oc). Procedure Open the HyperConverged CR by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the defaultCPUModel field to the CR and set the value to the name of a CPU model that exists in the cluster: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: "EPYC" Apply the YAML file to your cluster. 8.12.4. Using UEFI mode for virtual machines You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode. 8.12.4.1. About UEFI mode for virtual machines Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times. It stores all the information about initialization and startup in a file with a .efi extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer. 8.12.4.2. Booting virtual machines in UEFI mode You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine manifest. Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode: Booting in UEFI mode with secure boot active apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2 # ... 1 OpenShift Virtualization requires System Management Mode ( SMM ) to be enabled for Secure Boot in UEFI mode to occur. 2 OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot. Apply the manifest to your cluster by running the following command: USD oc create -f <file_name>.yaml 8.12.4.3. Enabling persistent EFI You can enable EFI persistence in a VM by configuring an RWX storage class at the cluster level and adjusting the settings in the EFI section of the VM. Prerequisites You must have cluster administrator privileges. You must have a storage class that supports RWX access mode and FS volume mode.
Procedure Enable the VMPersistentState feature gate by running the following command: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op":"replace","path":"/spec/featureGates/VMPersistentState", "value": true}]' 8.12.4.4. Configuring VMs with persistent EFI You can configure a VM to have EFI persistence enabled by editing its manifest file. Prerequisites VMPersistentState feature gate enabled. Procedure Edit the VM manifest file and save to apply settings. apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm spec: template: spec: domain: firmware: bootloader: efi: persistent: true # ... 8.12.5. Configuring PXE booting for virtual machines PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host. 8.12.5.1. PXE booting with a specified MAC address As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server. Prerequisites The PXE server must be connected to the same VLAN as the bridge. Procedure Configure a PXE network on the cluster: Create the network attachment definition file for PXE network pxe-net-conf : apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { "cniVersion": "0.3.1", "name": "pxe-net-conf", 2 "type": "bridge", 3 "bridge": "bridge-interface", 4 "macspoofchk": false, 5 "vlan": 100, 6 "disableContainerInterface": true, "preserveDefaultVlan": false 7 } 1 The name for the NetworkAttachmentDefinition object. 2 The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 3 The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. This example uses a Linux bridge CNI plugin. You can also use an OVN-Kubernetes localnet or an SR-IOV CNI plugin. 4 The name of the Linux bridge configured on the node. 5 Optional: A flag to enable the MAC spoof check. When set to true , you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack. 6 Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy. 7 Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true . Create the network attachment definition by using the file you created in the step: USD oc create -f pxe-net-conf.yaml Edit the virtual machine instance configuration file to include the details of the interface and network. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. Ensure that bootOrder is set to 1 so that the interface boots first. 
In this example, the interface is connected to a network called <pxe-net> : interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1 Note Boot order is global for interfaces and disks. Assign a boot device number to the disk to ensure proper booting after operating system provisioning. Set the disk bootOrder value to 2 : devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2 Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf> : networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf Create the virtual machine instance: USD oc create -f vmi-pxe-boot.yaml Example output virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created Wait for the virtual machine instance to run: USD oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running View the virtual machine instance using VNC: USD virtctl vnc vmi-pxe-boot Watch the boot screen to verify that the PXE boot is successful. Log in to the virtual machine instance: USD virtctl console vmi-pxe-boot Verification Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0 , got an IP address from Red Hat OpenShift Service on AWS. USD ip addr Example output ... 3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff 8.12.5.2. OpenShift Virtualization networking glossary The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition (NAD) A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. UserDefinedNetwork (UDN) A namespace-scoped CRD introduced by the user-defined network API that can be used to create a tenant network that isolates the tenant namespace from other namespaces. ClusterUserDefinedNetwork (CUDN) A cluster-scoped CRD introduced by the user-defined network API that cluster administrators can use to create a shared network across multiple namespaces. 8.12.6. Scheduling virtual machines You can schedule a virtual machine (VM) on a node by ensuring that the VM's CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node. 8.12.6.1. Policy attributes You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node. Policy attribute Description force The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM's CPU. 
require Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM's CPU or the hypervisor must be able to emulate the supported CPU model. optional The VM is added to a node if that VM is supported by the host's physical machine CPU. disable The VM cannot be scheduled with CPU node discovery. forbid The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. 8.12.6.2. Setting a policy attribute and CPU feature You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor. Procedure Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM): apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2 1 Name of the CPU feature for the VM. 2 Policy attribute for the VM. 8.12.6.3. Scheduling virtual machines with the supported CPU model You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported. Procedure Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1 1 CPU model for the VM. 8.12.6.4. Scheduling virtual machines with the host model When the CPU model for a virtual machine (VM) is set to host-model , the VM inherits the CPU model of the node where it is scheduled. Procedure Edit the domain spec of your VM configuration file. The following example shows host-model being specified for the virtual machine: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1 1 The VM that inherits the CPU model of the node where it is scheduled. 8.12.6.5. Scheduling virtual machines with a custom scheduler You can use a custom scheduler to schedule a virtual machine (VM) on a node. Prerequisites A secondary scheduler is configured for your cluster. Procedure Add the custom scheduler to the VM configuration by editing the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: runStrategy: Always template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio # ... 1 The name of the custom scheduler. If the schedulerName value does not match an existing scheduler, the virt-launcher pod stays in a Pending state until the specified scheduler is found.
Verification Verify that the VM is using the custom scheduler specified in the VirtualMachine manifest by checking the virt-launcher pod events: View the list of pods in your cluster by entering the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m Run the following command to display the pod events: USD oc describe pod virt-launcher-vm-fedora-dpc87 The value of the From field in the output verifies that the scheduler name matches the custom scheduler specified in the VirtualMachine manifest: Example output [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...] 8.12.7. About high availability for virtual machines You can enable high availability for virtual machines (VMs) by configuring remediating nodes. You can configure remediating nodes by installing the Self Node Remediation Operator or the Fence Agents Remediation Operator from the OperatorHub and enabling machine health checks or node remediation checks. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. 8.12.8. Virtual machine control plane tuning OpenShift Virtualization offers the following tuning options at the control-plane level: The highBurst profile, which uses fixed QPS and burst rates, to create hundreds of virtual machines (VMs) in one batch Migration setting adjustment based on workload type 8.12.8.1. Configuring a highBurst profile Use the highBurst profile to create and maintain a large number of virtual machines (VMs) in one cluster. Procedure Apply the following patch to enable the highBurst tuning policy profile: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", \ "value": "highBurst"}]' Verification Run the following command to verify the highBurst tuning policy profile is enabled: USD oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged \ -n openshift-cnv -o go-template --template='{{range USDconfig, \ USDvalue := .spec.configuration}} {{if eq USDconfig "apiConfiguration" \ "webhookConfiguration" "controllerConfiguration" "handlerConfiguration"}} \ {{"\n"}} {{USDconfig}} = {{USDvalue}} {{end}} {{end}} {{"\n"}}' 8.13. VM disks 8.13.1. Hot-plugging VM disks You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI). Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot-unplugged. You cannot hot plug or hot-unplug container disks. A hot plugged disk remains attached to the VM even after reboot. You must detach the disk to remove it from the VM. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Note Each VM has a virtio-scsi controller so that hot plugged disks can use the scsi bus. The virtio-scsi controller overcomes the limitations of virtio while retaining its performance advantages. It is highly scalable and supports hot plugging over 4 million disks. Regular virtio is not available for hot plugged disks because it is not scalable. Each virtio disk uses one of the limited PCI Express (PCIe) slots in the VM. PCIe slots are also used by other devices and must be reserved in advance. Therefore, slots might not be available on demand. 8.13.1.1.
Hot plugging and hot unplugging a disk by using the web console You can hot plug a disk by attaching it to a virtual machine (VM) while the VM is running by using the Red Hat OpenShift Service on AWS web console. The hot plugged disk remains attached to the VM until you unplug it. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Prerequisites You must have a data volume or persistent volume claim (PVC) available for hot plugging. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a running VM to view its details. On the VirtualMachine details page, click Configuration Disks . Add a hot plugged disk: Click Add disk . In the Add disk (hot plugged) window, select the disk from the Source list and click Save . Optional: Unplug a hot plugged disk: Click the Options menu beside the disk and select Detach . Click Detach . Optional: Make a hot plugged disk persistent: Click the Options menu beside the disk and select Make persistent . Reboot the VM to apply the change. 8.13.1.2. Hot plugging and hot unplugging a disk by using the command line You can hot plug and hot unplug a disk while a virtual machine (VM) is running by using the command line. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Prerequisites You must have at least one data volume or persistent volume claim (PVC) available for hot plugging. Procedure Hot plug a disk by running the following command: USD virtctl addvolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC> \ [--persist] [--serial=<label-name>] Use the optional --persist flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot plug or hot unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances. The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC. Hot unplug a disk by running the following command: USD virtctl removevolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC> 8.13.2. Expanding virtual machine disks You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. If your storage provider does not support volume expansion, you can expand the available virtual storage of a VM by adding blank data volumes. You cannot reduce the size of a VM disk. 8.13.2.1. Increasing a VM disk size by expanding the PVC of the disk You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. To specify the increased PVC volume, you can use the web console with the VM running. Alternatively, you can edit the PVC manifest in the CLI. Note If the PVC uses the file system volume mode, the disk image file expands to the available size while reserving some space for file system overhead. 8.13.2.1.1. Expanding a VM disk PVC in the web console You can increase the size of a VM disk PVC in the web console without leaving the VirtualMachines page and with the VM running. Procedure In the Administrator or Virtualization perspective, open the VirtualMachines page. 
Select the running VM to open its Details page. Select the Configuration tab and click Storage . Click the options menu beside the disk that you want to expand. Select the Edit option. The Edit disk dialog opens. In the PersistentVolumeClaim size field, enter the desired size. Click Save . Note You can enter any value greater than the current one. However, if the new value exceeds the available size, an error is displayed. 8.13.2.1.2. Expanding a VM disk PVC by editing its manifest Procedure Edit the PersistentVolumeClaim manifest of the VM disk that you want to expand: USD oc edit pvc <pvc_name> Update the disk size: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1 # ... 1 Specify the new disk size. Additional resources for volume expansion Extending a basic volume in Windows Extending an existing file system partition without destroying data in Red Hat Enterprise Linux Extending a logical volume and its file system online in Red Hat Enterprise Linux 8.13.2.2. Expanding available virtual storage by adding blank data volumes You can expand the available storage of a virtual machine (VM) by adding blank data volumes. Prerequisites You must have at least one persistent volume. Procedure Create a DataVolume manifest as shown in the following example: Example DataVolume manifest apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> 1 storageClassName: "<storage_class>" 2 1 Specify the amount of available space requested for the data volume. 2 Optional: If you do not specify a storage class, the default storage class is used. Create the data volume by running the following command: USD oc create -f <blank-image-datavolume>.yaml Additional resources for data volumes Configuring preallocation mode for data volumes Managing data volume annotations 8.13.3. Migrating VM disks to a different storage class You can migrate one or more virtual disks to a different storage class without stopping your virtual machine (VM) or virtual machine instance (VMI). 8.13.3.1. Migrating VM disks to a different storage class by using the web console You can migrate one or more disks attached to a virtual machine (VM) to a different storage class by using the Red Hat OpenShift Service on AWS web console. When performing this action on a running VM, the operation of the VM is not interrupted and the data on the migrated disks remains accessible. Note With the OpenShift Virtualization Operator, you can only start storage class migration for one VM at a time and the VM must be running. If you need to migrate more VMs at once or migrate a mix of running and stopped VMs, consider using the Migration Toolkit for Containers (MTC) . Migration Toolkit for Containers is not part of OpenShift Virtualization and requires separate installation. Important Storage class migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
Prerequisites You must have a data volume or a persistent volume claim (PVC) available for storage class migration. The cluster must have a node available for live migration. As part of the storage class migration, the VM is live migrated to a different node. The VM must be running. Procedure Navigate to Virtualization VirtualMachines in the web console. Click the Options menu beside the virtual machine and select Migration Storage . You can also access this option from the VirtualMachine details page by selecting Actions Migration Storage . On the Migration details page, choose whether to migrate the entire VM storage or selected volumes only. If you click Selected volumes , select any disks that you intend to migrate. Click to proceed. From the list of available options on the Destination StorageClass page, select the storage class to migrate to. Click to proceed. On the Review page, review the list of affected disks and the target storage class. To start the migration, click Migrate VirtualMachine storage . Stay on the Migrate VirtualMachine storage page to watch the progress and wait for the confirmation that the migration completed successfully. Verification From the VirtualMachine details page, navigate to Configuration Storage . Verify that all disks have the expected storage class listed in the Storage class column. | [
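You can also confirm the new storage class from the command line. The following check is a minimal sketch rather than part of the documented procedure; it assumes that you are logged in to the cluster with the oc CLI and that <namespace> is the namespace of the migrated VM. List the persistent volume claims and their storage classes by running the following command: USD oc get pvc -n <namespace> -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName Each migrated disk should report the target storage class in the STORAGECLASS column of the output.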
"yum install -y qemu-guest-agent",
"systemctl enable --now qemu-guest-agent",
"oc get vm <vm_name>",
"net start",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"virtctl start <vm> -n <namespace>",
"oc apply -f <vm.yaml>",
"virtctl vnc <vm_name>",
"virtctl vnc <vm_name> -v 4",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/deployVmConsoleProxy\", \"value\": true}]'",
"curl --header \"Authorization: Bearer USD{TOKEN}\" \"https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>\"",
"{ \"token\": \"eyJhb...\" }",
"export VNC_TOKEN=\"<token>\"",
"oc login --token USD{VNC_TOKEN}",
"virtctl vnc <vm_name> -n <namespace>",
"virtctl delete serviceaccount --namespace \"<namespace>\" \"<vm_name>-vnc-access\"",
"kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --user=\"USD{USER_NAME}\"",
"kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --serviceaccount=\"USD{SERVICE_ACCOUNT_NAME}\"",
"virtctl console <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config user: cloud-user name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3",
"oc create -f <manifest_file>.yaml",
"virtctl start vm example-vm -n example-namespace",
"oc describe vm example-vm -n example-namespace",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3",
"oc create -f <manifest_file>.yaml",
"virtctl start vm example-vm -n example-namespace",
"oc describe vm example-vm -n example-namespace",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys",
"virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1",
"virtctl -n my-namespace ssh cloud-user@example-vm -i my-key",
"Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p",
"ssh <user>@vm/<vm_name>.<namespace>",
"virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1",
"virtctl expose vm example-vm --name example-service --type NodePort --port 22",
"oc get service",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Halted template: metadata: labels: special: key 1",
"apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000",
"oc create -f example-service.yaml",
"oc get service -n example-namespace",
"ssh <user_name>@<ip_address> -p <port> 1",
"oc describe vm <vm_name> -n <namespace>",
"Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default",
"ssh <user_name>@<ip_address> -i <ssh_key>",
"ssh [email protected] -i ~/.ssh/id_rsa_cloud-user",
"oc edit vm <vm_name>",
"oc apply vm <vm_name> -n <namespace>",
"oc edit vm <vm_name> -n <namespace>",
"disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default",
"oc delete vm <vm_name>",
"apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3",
"oc create -f example-export.yaml",
"oc get vmexport example-export -o yaml",
"apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: \"\" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:10:09Z\" reason: podReady status: \"True\" type: Ready - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:09:02Z\" reason: pvcBound status: \"True\" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: virt-export-example-export",
"oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1",
"oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1",
"oc get vmexport <export_name> -o yaml",
"apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: # links: external: # manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: # manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export",
"curl --cacert cacert.crt <secret_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"",
"curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"",
"curl --cacert cacert.crt <all_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"",
"curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"",
"oc get vmis -A",
"oc delete vmi <vmi_name>",
"kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: <storage_class_name>",
"oc edit vm <vm_name> -n <namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: 1 persistent: true 2",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: generateName: windows11-installer-run- labels: pipelinerun: windows11-installer-run spec: params: - name: winImageDownloadURL value: <windows_image_download_url> 1 - name: acceptEula value: false 2 pipelineRef: params: - name: catalog value: redhat-pipelines - name: type value: artifact - name: kind value: pipeline - name: name value: windows-efi-installer - name: version value: 4 resolver: hub taskRunSpecs: - pipelineTaskName: modify-windows-iso-file PodTemplate: securityContext: fsGroup: 107 runAsUser: 107",
"oc apply -f windows11-customize-run.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: runStrategy: Halted template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1",
"metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2",
"metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname",
"metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value",
"metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: \"EPYC\"",
"apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2",
"oc create -f <file_name>.yaml",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/featureGates/VMPersistentState\", \"value\": true}]'",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm spec: template: spec: domain: firmware: bootloader: efi: persistent: true",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", 2 \"type\": \"bridge\", 3 \"bridge\": \"bridge-interface\", 4 \"macspoofchk\": false, 5 \"vlan\": 100, 6 \"disableContainerInterface\": true, \"preserveDefaultVlan\": false 7 }",
"oc create -f pxe-net-conf.yaml",
"interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1",
"devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2",
"networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf",
"oc create -f vmi-pxe-boot.yaml",
"virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created",
"oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running",
"virtctl vnc vmi-pxe-boot",
"virtctl console vmi-pxe-boot",
"ip addr",
"3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1",
"apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: runStrategy: Always template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio",
"oc get pods",
"NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m",
"oc describe pod virt-launcher-vm-fedora-dpc87",
"[...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...]",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/tuningPolicy\", \"value\": \"highBurst\"}]'",
"oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged -n openshift-cnv -o go-template --template='{{range USDconfig, USDvalue := .spec.configuration}} {{if eq USDconfig \"apiConfiguration\" \"webhookConfiguration\" \"controllerConfiguration\" \"handlerConfiguration\"}} {{\"\\n\"}} {{USDconfig}} = {{USDvalue}} {{end}} {{end}} {{\"\\n\"}}",
"virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]",
"virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>",
"oc edit pvc <pvc_name>",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> 1 storageClassName: \"<storage_class>\" 2",
"oc create -f <blank-image-datavolume>.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/virtualization/managing-vms |
Chapter 4. Certificate types and descriptions | Chapter 4. Certificate types and descriptions 4.1. User-provided certificates for the API server 4.1.1. Purpose The API server is accessible by clients external to the cluster at api.<cluster_name>.<base_domain> . You might want clients to access the API server at a different hostname or without the need to distribute the cluster-managed certificate authority (CA) certificates to the clients. The administrator must set a custom default certificate to be used by the API server when serving content. 4.1.2. Location The user-provided certificates must be provided in a kubernetes.io/tls type Secret in the openshift-config namespace. Update the API server cluster configuration, the apiserver/cluster resource, to enable the use of the user-provided certificate. 4.1.3. Management User-provided certificates are managed by the user. 4.1.4. Expiration API server client certificate expiration is less than five minutes. User-provided certificates are managed by the user. 4.1.5. Customization Update the secret containing the user-managed certificate as needed. Additional resources Adding API server certificates 4.2. Proxy certificates 4.2.1. Purpose Proxy certificates allow users to specify one or more custom certificate authority (CA) certificates used by platform components when making egress connections. The trustedCA field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a config map named trusted-ca-bundle in the openshift-config-managed namespace. The namespace for the config map referenced by trustedCA is openshift-config : apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- Additional resources Configuring the cluster-wide proxy 4.2.2. Managing proxy certificates during installation The additionalTrustBundle value of the installer configuration is used to specify any proxy-trusted CA certificates during installation. For example: USD cat install-config.yaml Example output ... proxy: httpProxy: http://<https://username:[email protected]:123/> httpsProxy: https://<https://username:[email protected]:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 4.2.3. Location The user-provided trust bundle is represented as a config map. The config map is mounted into the file system of platform components that make egress HTTPS calls. Typically, Operators mount the config map to /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem , but this is not required by the proxy. A proxy can modify or inspect the HTTPS connection. 
In either case, the proxy must generate and sign a new certificate for the connection. Complete proxy support means connecting to the specified proxy and trusting any signatures it has generated. Therefore, it is necessary to let the user specify a trusted root, such that any certificate chain connected to that trusted root is also trusted. If using the RHCOS trust bundle, place CA certificates in /etc/pki/ca-trust/source/anchors . See Using shared system certificates in the Red Hat Enterprise Linux documentation for more information. 4.2.4. Expiration The user sets the expiration term of the user-provided trust bundle. The default expiration term is defined by the CA certificate itself. It is up to the CA administrator to configure this for the certificate before it can be used by OpenShift Container Platform or RHCOS. Note Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. However, you might need to periodically update the trust bundle. 4.2.5. Services By default, all platform components that make egress HTTPS calls will use the RHCOS trust bundle. If trustedCA is defined, it will also be used. Any service that is running on the RHCOS node is able to use the trust bundle of the node. 4.2.6. Management These certificates are managed by the system and not the user. 4.2.7. Customization Updating the user-provided trust bundle consists of either: updating the PEM-encoded certificates in the config map referenced by trustedCA, or creating a config map in the namespace openshift-config that contains the new trust bundle and updating trustedCA to reference the name of the new config map. The mechanism for writing CA certificates to the RHCOS trust bundle is exactly the same as writing any other file to RHCOS, which is done through the use of machine configs. When the Machine Config Operator (MCO) applies the new machine config that contains the new CA certificates, the node is rebooted. During the boot, the service coreos-update-ca-trust.service runs on the RHCOS nodes, which automatically update the trust bundle with the new CA certificates. 
For example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt The trust store of machines must also support updating the trust store of nodes. 4.2.8. Renewal There are no Operators that can auto-renew certificates on the RHCOS nodes. Note Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. However, you might need to periodically update the trust bundle. 4.3. Service CA certificates 4.3.1. Purpose service-ca is an Operator that creates a self-signed CA when an OpenShift Container Platform cluster is deployed. 4.3.2. Expiration A custom expiration term is not supported. The self-signed CA is stored in a secret with qualified name service-ca/signing-key in fields tls.crt (certificate(s)), tls.key (private key), and ca-bundle.crt (CA bundle). Other services can request a service serving certificate by annotating a service resource with service.beta.openshift.io/serving-cert-secret-name: <secret name> . In response, the Operator generates a new certificate, as tls.crt , and private key, as tls.key to the named secret. The certificate is valid for two years. Other services can request that the CA bundle for the service CA be injected into API service or config map resources by annotating with service.beta.openshift.io/inject-cabundle: true to support validating certificates generated from the service CA. 
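For example, the annotations might be applied from the command line as follows; this is a minimal sketch, and the names my-service, my-service-tls, and my-cabundle are illustrative placeholders for resources that must already exist in the target namespace:
USD oc annotate service my-service service.beta.openshift.io/serving-cert-secret-name=my-service-tls
USD oc annotate configmap my-cabundle service.beta.openshift.io/inject-cabundle=true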
In response, the Operator writes its current CA bundle to the CABundle field of an API service or as service-ca.crt to a config map. As of OpenShift Container Platform 4.3.5, automated rotation is supported and is backported to some 4.2.z and 4.3.z releases. For any release supporting automated rotation, the service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA. The service CA expiration of 26 months is longer than the expected upgrade interval for a supported OpenShift Container Platform cluster, such that non-control plane consumers of service CA certificates will be refreshed after CA rotation and prior to the expiration of the pre-rotation CA. Warning A manually-rotated service CA does not maintain trust with the service CA. You might experience a temporary service disruption until the Pods in the cluster are restarted, which ensures that Pods are using service serving certificates issued by the new service CA. 4.3.3. Management These certificates are managed by the system and not the user. 4.3.4. Services Services that use service CA certificates include: cluster-autoscaler-operator cluster-monitoring-operator cluster-authentication-operator cluster-image-registry-operator cluster-ingress-operator cluster-kube-apiserver-operator cluster-kube-controller-manager-operator cluster-kube-scheduler-operator cluster-networking-operator cluster-openshift-apiserver-operator cluster-openshift-controller-manager-operator cluster-samples-operator machine-config-operator console-operator insights-operator machine-api-operator operator-lifecycle-manager This is not a comprehensive list. Additional resources Manually rotate service serving certificates Securing service traffic using service serving certificate secrets 4.4. Node certificates 4.4.1. Purpose Node certificates are signed by the cluster; they come from a certificate authority (CA) that is generated by the bootstrap process. Once the cluster is installed, the node certificates are auto-rotated. 4.4.2. Management These certificates are managed by the system and not the user. Additional resources Working with nodes 4.5. Bootstrap certificates 4.5.1. Purpose The kubelet, in OpenShift Container Platform 4 and later, uses the bootstrap certificate located in /etc/kubernetes/kubeconfig to initially bootstrap. This is followed by the bootstrap initialization process and authorization of the kubelet to create a CSR . In that process, the kubelet generates a CSR while communicating over the bootstrap channel. The controller manager signs the CSR, resulting in a certificate that the kubelet manages. 4.5.2. Management These certificates are managed by the system and not the user. 4.5.3. Expiration This bootstrap CA is valid for 10 years. The kubelet-managed certificate is valid for one year and rotates automatically at around the 80 percent mark of that one year. 4.5.4. Customization You cannot customize the bootstrap certificates. 4.6. etcd certificates 4.6.1. Purpose etcd certificates are signed by the etcd-signer; they come from a certificate authority (CA) that is generated by the bootstrap process. 4.6.2. Expiration The CA certificates are valid for 10 years. The peer, client, and server certificates are valid for three years. 4.6.3. Management These certificates are managed by the system and not the user. 4.6.4. 
Services etcd certificates are used for encrypted communication between etcd member peers, as well as encrypted client traffic. The following certificates are generated and used by etcd and other processes that communicate with etcd: Peer certificates: Used for communication between etcd members. Client certificates: Used for encrypted server-client communication. Client certificates are currently used by the API server only, and no other service should connect to etcd directly except for the proxy. Client secrets ( etcd-client , etcd-metric-client , etcd-metric-signer , and etcd-signer ) are added to the openshift-config , openshift-monitoring , and openshift-kube-apiserver namespaces. Server certificates: Used by the etcd server for authenticating client requests. Metric certificates: All metric consumers connect to proxy with metric-client certificates. Additional resources Restoring to a cluster state 4.7. OLM certificates 4.7.1. Management All certificates for OpenShift Lifecycle Manager (OLM) components ( olm-operator , catalog-operator , packageserver , and marketplace-operator ) are managed by the system. When installing Operators that include webhooks or API services in their ClusterServiceVersion (CSV) object, OLM creates and rotates the certificates for these resources. Certificates for resources in the openshift-operator-lifecycle-manager namespace are managed by OLM. OLM will not update the certificates of Operators that it manages in proxy environments. These certificates must be managed by the user using the subscription config. 4.8. User-provided certificates for default ingress 4.8.1. Purpose Applications are usually exposed at <route_name>.apps.<cluster_name>.<base_domain> . The <cluster_name> and <base_domain> come from the installation config file. <route_name> is the host field of the route, if specified, or the route name. For example, hello-openshift-default.apps.username.devcluster.openshift.com . hello-openshift is the name of the route and the route is in the default namespace. You might want clients to access the applications without the need to distribute the cluster-managed CA certificates to the clients. The administrator must set a custom default certificate when serving application content. Warning The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use operator-generated default certificates in production clusters. 4.8.2. Location The user-provided certificates must be provided in a tls type Secret resource in the openshift-ingress namespace. Update the IngressController CR in the openshift-ingress-operator namespace to enable the use of the user-provided certificate. For more information on this process, see Setting a custom default certificate . 4.8.3. Management User-provided certificates are managed by the user. 4.8.4. Expiration User-provided certificates are managed by the user. 4.8.5. Services Applications deployed on the cluster use user-provided certificates for default ingress. 4.8.6. Customization Update the secret containing the user-managed certificate as needed. Additional resources Replacing the default ingress certificate 4.9. Ingress certificates 4.9.1. Purpose The Ingress Operator uses certificates for: Securing access to metrics for Prometheus. Securing access to routes. 4.9.2. Location To secure access to Ingress Operator and Ingress Controller metrics, the Ingress Operator uses service serving certificates. 
The Operator requests a certificate from the service-ca controller for its own metrics, and the service-ca controller puts the certificate in a secret named metrics-tls in the openshift-ingress-operator namespace. Additionally, the Ingress Operator requests a certificate for each Ingress Controller, and the service-ca controller puts the certificate in a secret named router-metrics-certs-<name> , where <name> is the name of the Ingress Controller, in the openshift-ingress namespace. Each Ingress Controller has a default certificate that it uses for secured routes that do not specify their own certificates. Unless you specify a custom certificate, the Operator uses a self-signed certificate by default. The Operator uses its own self-signed signing certificate to sign any default certificate that it generates. The Operator generates this signing certificate and puts it in a secret named router-ca in the openshift-ingress-operator namespace. When the Operator generates a default certificate, it puts the default certificate in a secret named router-certs-<name> (where <name> is the name of the Ingress Controller) in the openshift-ingress namespace. Warning The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters. 4.9.3. Workflow Figure 4.1. Custom certificate workflow Figure 4.2. Default certificate workflow An empty defaultCertificate field causes the Ingress Operator to use its self-signed CA to generate a serving certificate for the specified domain. The default CA certificate and key generated by the Ingress Operator. Used to sign Operator-generated default serving certificates. In the default workflow, the wildcard default serving certificate, created by the Ingress Operator and signed using the generated default CA certificate. In the custom workflow, this is the user-provided certificate. The router deployment. Uses the certificate in secrets/router-certs-default as its default front-end server certificate. In the default workflow, the contents of the wildcard default serving certificate (public and private parts) are copied here to enable OAuth integration. In the custom workflow, this is the user-provided certificate. The public (certificate) part of the default serving certificate. Replaces the configmaps/router-ca resource. The user updates the cluster proxy configuration with the CA certificate that signed the ingresscontroller serving certificate. This enables components like auth , console , and the registry to trust the serving certificate. The cluster-wide trusted CA bundle containing the combined Red Hat Enterprise Linux CoreOS (RHCOS) and user-provided CA bundles or an RHCOS-only bundle if a user bundle is not provided. The custom CA certificate bundle, which instructs other components (for example, auth and console ) to trust an ingresscontroller configured with a custom certificate. The trustedCA field is used to reference the user-provided CA bundle. The Cluster Network Operator injects the trusted CA bundle into the proxy-ca config map. OpenShift Container Platform 4.7 and newer use default-ingress-cert . 4.9.4. Expiration The expiration terms for the Ingress Operator's certificates are as follows: The expiration date for metrics certificates that the service-ca controller creates is two years after the date of creation. 
The expiration date for the Operator's signing certificate is two years after the date of creation. The expiration date for default certificates that the Operator generates is two years after the date of creation. You cannot specify custom expiration terms on certificates that the Ingress Operator or service-ca controller creates. You cannot specify expiration terms when installing OpenShift Container Platform for certificates that the Ingress Operator or service-ca controller creates. 4.9.5. Services Prometheus uses the certificates that secure metrics. The Ingress Operator uses its signing certificate to sign default certificates that it generates for Ingress Controllers for which you do not set custom default certificates. Cluster components that use secured routes may use the default Ingress Controller's default certificate. Ingress to the cluster via a secured route uses the default certificate of the Ingress Controller by which the route is accessed unless the route specifies its own certificate. 4.9.6. Management Ingress certificates are managed by the user. See Replacing the default ingress certificate for more information. 4.9.7. Renewal The service-ca controller automatically rotates the certificates that it issues. However, it is possible to use oc delete secret <secret> to manually rotate service serving certificates. The Ingress Operator does not rotate its own signing certificate or the default certificates that it generates. Operator-generated default certificates are intended as placeholders for custom default certificates that you configure. 4.10. Monitoring and OpenShift Logging Operator component certificates 4.10.1. Expiration Monitoring components secure their traffic with service CA certificates. These certificates are valid for 2 years and are replaced automatically on rotation of the service CA, which is every 13 months. If the certificate lives in the openshift-monitoring or openshift-logging namespace, it is system managed and rotated automatically. 4.10.2. Management These certificates are managed by the system and not the user. 4.11. Control plane certificates 4.11.1. Location Control plane certificates are included in these namespaces: openshift-config-managed openshift-kube-apiserver openshift-kube-apiserver-operator openshift-kube-controller-manager openshift-kube-controller-manager-operator openshift-kube-scheduler 4.11.2. Management Control plane certificates are managed by the system and rotated automatically. In the rare case that your control plane certificates have expired, see Recovering from expired control plane certificates . | [
"apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----",
"cat install-config.yaml",
"proxy: httpProxy: http://<https://username:[email protected]:123/> httpsProxy: https://<https://username:[email protected]:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/security_and_compliance/certificate-types-and-descriptions |
Chapter 1. Overview of Red Hat Insights for OpenShift Vulnerability dashboard service | Chapter 1. Overview of Red Hat Insights for OpenShift Vulnerability dashboard service The Red Hat Insights for OpenShift Vulnerability Dashboard service provides information about the exposure of your Insights for OpenShift cluster infrastructure to Common Vulnerabilities and Exposures (CVEs). CVEs are security exposures or flaws identified in publicly released software packages. Insights for OpenShift repeatedly analyzes data collected by the Insights Operator about your clusters and images, as well as CVEs. The results from the analysis are used to provide information about how vulnerable your OpenShift clusters are to CVEs. You can view analysis results in the Insights for OpenShift Vulnerability Dashboard, which is located in the Red Hat Hybrid Cloud Console. For more information about remote health monitoring, see About remote health monitoring . Using the Red Hat Insights for OpenShift vulnerability dashboard service, you can make assessments and perform comprehensive monitoring of the exposure of your clusters and images to CVEs, enabling you to better understand and prioritize the risks posed to your organization. The Red Hat Insights for OpenShift vulnerability dashboard provides CVE data for: Your OpenShift components running as containers (for example, Operators) are covered, but the operating system running under the cluster is not covered Images from catalog.redhat.com Note The Red Hat Insights for OpenShift vulnerability dashboard service provides CVE information about workloads running on the cluster if the workload is an image known to catalog.redhat.com. The service does not provide CVE data about custom workloads (completely custom images or images using catalog.redhat.com images as a base). 1.1. Requirements and prerequisites to view data in the Red Hat Insights for OpenShift vulnerability dashboard For your cluster data to be visible in the Insights Vulnerability Dashboard service, your clusters must be connected and active, which means that they must be registered to Red Hat OpenShift Cluster Manager. Registering to Red Hat OpenShift Cluster Manager enables remote health monitoring by default, allowing Telemetry and Insights Operator data from the past 30 days to be sent to the vulnerability dashboard. For more information about Telemetry and the Insights Operator, see About remote health monitoring . In some cases, your clusters might be disconnected or stop sending data. For example, a cluster in an air-gapped environment is a disconnected cluster. In such cases, you might need to register or upload your data in a different way than using the Red Hat OpenShift Cluster Manager web console. To learn more about registering a cluster, and what to do if you have disconnected clusters, see Registering OpenShift Container Platform clusters to OpenShift Cluster Manager . You can access the dashboard by navigating to OpenShift > Vulnerability Dashboard . From there, you can expand the navigation panel to view CVEs and Clusters. You can assess the CVEs and look at vulnerable (also referred to as exposed) clusters and exposed images. 
Clicking on one of the options, CVEs or Clusters takes you to the following: The CVEs list view (shows a detailed view of CVEs, where you can view, sort, and filter to get more details about CVEs for exposed clusters and exposed images within those clusters) The Clusters list view (shows a detailed view of vulnerable clusters, where you can view, sort, and filter to get more details about exposed clusters and exposed images within those clusters) Note You will see a list of your clusters if the Insights Operator has sent information within the past 30 days. Important Red Hat Insights for OpenShift does not determine whether any of your connected Insights for OpenShift clusters have been exploited. The Red Hat Insights for OpenShift vulnerability dashboard identifies CVEs that might pose a risk to clusters and images in your Red Hat OpenShift Container Platform environment. Additional resources Getting Started with Red Hat Insights for OpenShift OpenShift Cluster Manager Information about CVEs at mitre.org 1.2. About Common Vulnerabilities and Exposures (CVEs) in Red Hat Insights for OpenShift You can use Red Hat Insights for OpenShift to identify Common Vulnerabilities and Exposures (CVEs) affecting your Insights for OpenShift clusters, and to help you understand the potential risks to your clusters. You can use the visibility of CVEs affecting your Insights for OpenShift clusters to prioritize your most critical issues. Important Only CVEs with Red Hat-issued security advisories (RHSAs) are included in the Red Hat Insights for OpenShift vulnerability dashboard. Additional resources What is a CVE Explaining Red Hat Errata 1.3. Data collection and security Red Hat Insights does not collect identifying information, such as user names, passwords, or certificates. See Red Hat Insights Data & Application Security for information about Insights data collection and controls. Additional resources About remote health monitoring Showing data collected by remote health monitoring Opting out of remote health reporting | null | https://docs.redhat.com/en/documentation/red_hat_insights_for_openshift/1-latest/html/assessing_security_vulnerabilities_in_your_openshift_cluster_using_red_hat_insights/assembly_vuln-overview |
Chapter 16. command | Chapter 16. command This chapter describes the commands under the command command. 16.1. command list List recognized commands by group Usage: Table 16.1. Command arguments Value Summary -h, --help Show this help message and exit --group <group-keyword> Show commands filtered by a command group, for example: identity, volume, compute, image, network and other keywords Table 16.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 16.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 16.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 16.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack command list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--group <group-keyword>]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/command |
Chapter 10. Managing user-provisioned infrastructure manually | Chapter 10. Managing user-provisioned infrastructure manually 10.1. Adding compute machines to clusters with user-provisioned infrastructure manually You can add compute machines to a cluster on user-provisioned infrastructure either as part of the installation process or after installation. The post-installation process requires some of the same configuration files and parameters that were used during installation. 10.1.1. Adding compute machines to Amazon Web Services To add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS), see Adding compute machines to AWS by using CloudFormation templates . 10.1.2. Adding compute machines to Microsoft Azure To add more compute machines to your OpenShift Container Platform cluster on Microsoft Azure, see Creating additional worker machines in Azure . 10.1.3. Adding compute machines to Azure Stack Hub To add more compute machines to your OpenShift Container Platform cluster on Azure Stack Hub, see Creating additional worker machines in Azure Stack Hub . 10.1.4. Adding compute machines to Google Cloud Platform To add more compute machines to your OpenShift Container Platform cluster on Google Cloud Platform (GCP), see Creating additional worker machines in GCP . 10.1.5. Adding compute machines to vSphere You can use compute machine sets to automate the creation of additional compute machines for your OpenShift Container Platform cluster on vSphere. To manually add more compute machines to your cluster, see Adding compute machines to vSphere manually . 10.1.6. Adding compute machines to bare metal To add more compute machines to your OpenShift Container Platform cluster on bare metal, see Adding compute machines to bare metal . 10.2. Adding compute machines to AWS by using CloudFormation templates You can add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS) that you created by using the sample CloudFormation templates. 10.2.1. Prerequisites You installed your cluster on AWS by using the provided AWS CloudFormation templates . You have the JSON file and CloudFormation template that you used to create the compute machines during cluster installation. If you do not have these files, you must recreate them by following the instructions in the installation procedure . 10.2.2. Adding more compute machines to your AWS cluster by using CloudFormation templates You can add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS) that you created by using the sample CloudFormation templates. Important The CloudFormation template creates a stack that represents one compute machine. You must create a stack for each compute machine. Note If you do not use the provided CloudFormation template to create your compute nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You installed an OpenShift Container Platform cluster by using CloudFormation templates and have access to the JSON file and CloudFormation template that you used to create the compute machines during cluster installation. You installed the AWS CLI. Procedure Create another compute stack. 
Launch the template: USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-workers . You must provide the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create compute stacks until you have created enough compute machines for your cluster. 10.2.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. 
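A minimal illustration of such automation is a filter that approves only pending client CSRs submitted by the node-bootstrapper service account; serving CSRs are requested by the node itself, and this sketch does not verify the identity of the requesting node, so treat it as a starting point rather than a complete approver:
USD oc get csr -o go-template='{{range .items}}{{if and (not .status) (eq .spec.username "system:serviceaccount:openshift-machine-config-operator:node-bootstrapper")}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve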
To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.3. Adding compute machines to vSphere manually You can add more compute machines to your OpenShift Container Platform cluster on VMware vSphere manually. Note You can also use compute machine sets to automate the creation of additional VMware vSphere compute machines for your cluster. 10.3.1. Prerequisites You installed a cluster on vSphere . You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . Important If you do not have access to the Red Hat Enterprise Linux CoreOS (RHCOS) images that were used to create your cluster, you can add more compute machines to your OpenShift Container Platform cluster with newer versions of Red Hat Enterprise Linux CoreOS (RHCOS) images. For instructions, see Adding new nodes to UPI cluster fails after upgrading to OpenShift 4.6+ . 10.3.2. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. 
On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 10.3.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.4. Adding compute machines to bare metal You can add more compute machines to your OpenShift Container Platform cluster on bare metal. 10.4.1. Prerequisites You installed a cluster on bare metal . You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . If a DHCP server is available for your user-provisioned infrastructure, you have added the details for the additional compute machines to your DHCP server configuration. This includes a persistent IP address, DNS server information, and a hostname for each machine. You have updated your DNS configuration to include the record name and IP address of each compute machine that you are adding. You have validated that DNS lookup and reverse DNS lookup resolve correctly. Important If you do not have access to the Red Hat Enterprise Linux CoreOS (RHCOS) images that were used to create your cluster, you can add more compute machines to your OpenShift Container Platform cluster with newer versions of Red Hat Enterprise Linux CoreOS (RHCOS) images. 
For instructions, see Adding new nodes to UPI cluster fails after upgrading to OpenShift 4.6+ . 10.4.2. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create RHCOS machines for it to use. You can either use an ISO image or network PXE booting to create the machines. Note You must use the same ISO image that you used to install a cluster to deploy all new nodes in a cluster. It is recommended to use the same Ignition config file. The nodes automatically upgrade themselves on the first boot before running the workloads. You can add the nodes before or after the upgrade. 10.4.2.1. Creating more RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Procedure Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. 
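For reference, the SHA512 digest used with the --ignition-hash option in the coreos-installer command above can be computed on the machine that serves the Ignition config file, for example (the file name worker.ign is an assumption):
USD sha512sum worker.ign
Prefix the resulting hash with sha512- when you pass it to coreos-installer.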
Continue to create more compute machines for your cluster. 10.4.2.2. Creating more RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE: 1 Specify locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. 2 Specify the location of the initramfs file that you uploaded to your HTTP server. This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 10.4.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. 
Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . | [
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"aws cloudformation describe-stacks --stack-name <name>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/machine_management/managing-user-provisioned-infrastructure-manually |
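The requirement above, to automatically approve kubelet serving certificate requests on user-provisioned infrastructure, can be met with a small watcher script. The sketch below is illustrative only and is not the supported machine-approver mechanism; it assumes oc is already authenticated with sufficient privileges and approves any pending CSR whose requestor is a system:node identity, so add your own check of the node name against your inventory before relying on it.

#!/usr/bin/env bash
# Minimal sketch: periodically approve pending kubelet serving CSRs.
# Assumes `oc` is logged in with cluster-admin rights; the 60s interval is a placeholder.
set -euo pipefail
while true; do
  # List pending CSRs (no status yet) together with the requesting identity.
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' |
  while read -r name requestor; do
    case "${requestor}" in
      system:node:*)   # serving CSRs are submitted by the node's own identity
        echo "Approving serving CSR ${name} from ${requestor}"
        oc adm certificate approve "${name}"
        ;;
    esac
  done
  sleep 60
done

Run it from a bastion host or package it as a CronJob; client (node-bootstrapper) CSRs are left to the approval flow already described above.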
1.3. Setting Up a Typing Break | 1.3. Setting Up a Typing Break Typing for a long period of time can be not only tiring, but it can also increase the risk of serious health problems, such as carpal tunnel syndrome. One way of preventing this is to configure the system to enforce typing breaks. To do so, select System Preferences Keyboard from the panel, click the Typing Break tab, and select the Lock screen to enforce typing break check box. Figure 1.8. Typing Break Properties To increase or decrease the allowed typing time before the break is enforced, click the up or down button next to the Work interval lasts label, respectively. You can do the same with the Break interval lasts setting to alter the length of the break itself. Finally, select the Allow postponing of breaks check box if you want to be able to delay the break in case you need to finish the work. The changes take effect immediately. Figure 1.9. Taking a break The next time you reach the time limit, you will be presented with a screen advising you to take a break, and a clock displaying the remaining time. If you have enabled it, the Postpone Break button will be located at the bottom right corner of the screen. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-keyboard-break |
Chapter 7. Reviewing monitoring dashboards | Chapter 7. Reviewing monitoring dashboards OpenShift Container Platform 4.9 provides a comprehensive set of monitoring dashboards that help you understand the state of cluster components and user-defined workloads. Use the Administrator perspective to access dashboards for the core OpenShift Container Platform components, including the following items: API performance etcd Kubernetes compute resources Kubernetes network resources Prometheus USE method dashboards relating to cluster and node performance Figure 7.1. Example dashboard in the Administrator perspective Use the Developer perspective to access Kubernetes compute resources dashboards that provide the following application metrics for a selected project: CPU usage Memory usage Bandwidth information Packet rate information Figure 7.2. Example dashboard in the Developer perspective Note In the Developer perspective, you can view dashboards for only one project at a time. 7.1. Reviewing monitoring dashboards as a cluster administrator In the Administrator perspective, you can view dashboards relating to core OpenShift Container Platform cluster components. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Container Platform web console, navigate to Observe Dashboards . Choose a dashboard in the Dashboard list. Some dashboards, such as etcd and Prometheus dashboards, produce additional sub-menus when selected. Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. 7.2. Reviewing monitoring dashboards as a developer Use the Developer perspective to view Kubernetes compute resources dashboards of a selected project. Prerequisites You have access to the cluster as a developer or as a user. You have view permissions for the project that you are viewing the dashboard for. Procedure In the Developer perspective in the OpenShift Container Platform web console, navigate to Observe Dashboard . Select a project from the Project: drop-down list. Select a dashboard from the Dashboard drop-down list to see the filtered metrics. Note All dashboards produce additional sub-menus when selected, except Kubernetes / Compute Resources / Namespace (Pods) . Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. Additional resources Monitoring project and application metrics using the Developer perspective 7.3. steps Accessing third-party UIs | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/monitoring/reviewing-monitoring-dashboards |
Chapter 4. Creating the Fujitsu ETERNUS environment file | Chapter 4. Creating the Fujitsu ETERNUS environment file The environment file that you create to configure custom back ends contains the settings for each back end that you want to define. It also contains other settings that are relevant to the deployment of a custom back end. For more information about environment files, see Environment Files in the Advanced Overcloud Customization guide. In addition, the environment file registers the heat template that you created earlier in Chapter 3, Preparing the Fujitsu ETERNUS heat template . The installation and echo commands defined in the heat template run on the appropriate nodes during deployment. The following example environment file contains the necessary sections for defining an ETERNUS device as a Block Storage back end. It also creates the back end definitions for each corresponding XML file orchestrated in Section 3.1, "Creating driver definitions for each Fujitsu ETERNUS back end" , and Section 3.2, "Example Fujitsu ETERNUS heat template" . eternusbackend-env.yaml 1 Define custom settings for all nodes before the core Puppet configuration with NodeExtraConfig . This ensures the following configuration when the Block Storage service deploys on the overcloud: The XML configuration files for each back end are present. The private key is generated. 2 Set the following parameters to false to disable the other back end types: CinderEnableIscsiBackend : other iSCSI back ends. CinderEnableRbdBackend : Red Hat Ceph Storage. CinderEnableNfsBackend : NFS. NovaEnableRbdBackend : ephemeral Red Hat Ceph Storage. 3 Define the Image service image storage settings with the GlanceBackend parameter. The following values are supported: file stores images on /var/lib/glance/images on each Controller node. swift uses the Object Storage service for image storage. cinder uses the Block Storage service for image storage. 4 Define custom settings for all Controller nodes with controllerExtraConfig . The cinder::config::cinder_config class is for the Block Storage service. Director stores these back end settings in the /etc/cinder/cinder.conf file of each node. 5 Configure a back end definition named FJFC with the FJFC/ string, and declare the volume_driver parameter under that back end definition. Set the Fibre Channel ETERNUS driver for the back end with the volume_driver parameter, for example cinder.volume.drivers.fujitsu.eternus_dx.eternus_dx_fc.FJDXFCDriver . 6 Set the path to the XML configuration file that the driver uses for the back end with cinder_eternus_config_file . Orchestrate the creation of /etc/cinder/eternus-fc.xml through the heat template, such as, /home/stack/templates/eternus-temp.yaml . 7 The volume_backend_name is the name that the Block Storage service uses to enable the back end. 8 Configure a new back end definition with the FJISCSI/ string. Set the iSCSI ETERNUS driver for the back end with the volume_driver parameter, for example cinder.volume.drivers.fujitsu.eternus_dx.eternus_dx_iscsi.FJDXISCSIDriver . 9 Set and enable custom back ends with the cinder_user_enabled_backends class. Use this class for user-enabled back ends only, such as those defined in the cinder::config::cinder_config class. 10 Make custom configuration files on the host available to a cinder-volume service running in a container with CinderVolumeOptVolumes . After creating the environment file, you can deploy your configuration. 
For more information about the environment file /home/stack/templates/eternusbackend-env.yaml , see Chapter 5, Deploying the configured Fujitsu ETERNUS back ends . | [
"resource_registry: OS::TripleO::NodeExtraConfig: /home/stack/templates/eternus-temp.yaml # 1 parameter_defaults: # 2 CinderEnableIscsiBackend: false CinderEnableRbdBackend: false CinderEnableNfsBackend: false NovaEnableRbdBackend: false GlanceBackend: file # 3 controllerExtraConfig: # 4 cinder::config::cinder_config: FJFC/volume_driver: # 5 value: cinder.volume.drivers.fujitsu.eternus_dx.eternus_dx_fc.FJDXFCDriver FJFC/cinder_eternus_config_file: # 6 value: /etc/cinder/eternus-fc.xml FJFC/volume_backend_name: # 7 value: FJFC FJFC/fujitsu_private_key_path: value: /etc/cinder/eternus FJISCSI/volume_driver: # 8 value: cinder.volume.drivers.fujitsu.eternus_dx.eternus_dx_iscsi.FJDXISCSIDriver FJISCSI/cinder_eternus_config_file: value: /etc/cinder/eternus-iscsi.xml FJISCSI/volume_backend_name: value: FJISCSI FJISCSI/fujitsu_private_key_path: value: /etc/cinder/eternus cinder_user_enabled_backends: ['FJFC','FJISCSI'] # 9 CinderVolumeOptVolumes: 10 - /etc/cinder/eternus-iscsi.xml:/etc/cinder/eternus-iscsi.xml:ro - /etc/cinder/eternus-fc.xml:/etc/cinder/eternus-fc.xml:ro - /etc/cinder/eternus:/etc/cinder/eternus:ro ContainerCinderVolumeImage: registry.connect.redhat.com/fujitsu/rhosp16-fujitsu-cinder-volume-161 ContainerImageRegistryLogin: True ContainerImageRegistryCredentials: registry.connect.redhat.com: my-username: my-password registry.redhat.io: my-username: my-password"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/fujitsu_eternus_back_end_guide/envfile |
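For reference, the file that cinder_eternus_config_file points to (for example, /etc/cinder/eternus-fc.xml) is a short XML document describing the SMI-S endpoint and storage pools of the ETERNUS device. The sketch below is illustrative only: the element names follow the Fujitsu ETERNUS OpenStack driver conventions, but every value is a placeholder, so confirm both against the driver documentation for your release. It is written as a shell heredoc so that it can be adapted into the heat template from Chapter 3.

# Illustrative only: placeholder SMI-S address, credentials, and pool names.
cat > /etc/cinder/eternus-fc.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<FUJITSU>
  <EternusIP>192.168.0.10</EternusIP>                <!-- SMI-S provider IP of the ETERNUS device -->
  <EternusPort>5988</EternusPort>                    <!-- SMI-S provider port -->
  <EternusUser>smisuser</EternusUser>                <!-- SMI-S account -->
  <EternusPassword>smispassword</EternusPassword>
  <EternusPool>raid_group_0</EternusPool>            <!-- pool used for volumes -->
  <EternusSnapPool>raid_group_1</EternusSnapPool>    <!-- pool used for snapshots -->
</FUJITSU>
EOF
chmod 0600 /etc/cinder/eternus-fc.xml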
Chapter 1. Installing a sample instance of Red Hat Single Sign-On | Chapter 1. Installing a sample instance of Red Hat Single Sign-On This section describes how to install and start a Red Hat Single Sign-On server in standalone mode, set up the initial admin user, and log in to the Red Hat Single Sign-On Admin Console. Additional Resources This installation is intended for practice use of Red Hat Single Sign-On. For instructions on installation in a production environment and full details on all product features, see the other guides in the Red Hat Single Sign-On documentation. 1.1. Installing the Red Hat Single Sign-On server For this sample instance of Red Hat Single Sign-On, this procedure involves installation in standalone mode. The server download ZIP file contains the scripts and binaries to run the Red Hat Single Sign-On server. You can install the server on Linux or Windows. Procedure Go to the Red Hat customer portal . Download the Red Hat Single Sign-On Server: rh-sso-7.6.zip Place the file in a directory you choose. Unpack the ZIP file using the appropriate unzip utility, such as unzip, tar, or Expand-Archive. Linux/Unix USD unzip rhsso-7.6.zip or USD tar -xvzf rh-sso-7.6.tar.gz Windows > Expand-Archive -Path 'C:Downloads\rhsso-7.6.zip' -DestinationPath 'C:\Downloads' Return to the Red Hat customer portal . Click the Patches tab. Download the Red Hat Single Sign-On 7.6.11 server patch. Place the downloaded ZIP file in a directory you choose. Go to the root directory of the Red Hat Single Sign-On server. Start the JBoss EAP command line interface. Linux/Unix USD ./bin/jboss-cli.sh Windows > .\bin\jboss-cli.bat Apply the patch. USD patch apply <path-to-zip>/rh-sso-7.6.11-patch.zip 1.2. Enable Java 17 for Red Hat Single Sign-On If you want to use Java SE 17 to run Red Hat Single Sign-On, an extra step is needed: run the bundled enable-elytron-se17.cli script file to prepare the server. If you use an earlier version of Java, this step is not necessary. Prerequisites You saw no errors during the Red Hat Single Sign-On server installation. Procedure Go to the root directory of the Red Hat Single Sign-On server. Run the jboss-cli command, passing the enable-elytron-se17.cli script. Linux/Unix USD ./bin/jboss-cli.sh --file=docs/examples/enable-elytron-se17.cli Windows > .\bin\jboss-cli.bat --file=docs\examples\enable-elytron-se17.cli 1.3. Starting the Red Hat Single Sign-On server You start the server on the system where you installed it. Prerequisites You saw no errors during the Red Hat Single Sign-On server installation. Procedure Go to the bin directory of the server distribution. Run the standalone boot script. Linux/Unix USD cd bin USD ./standalone.sh Windows > ...\bin\standalone.bat 1.4. Creating the admin account Before you can use Red Hat Single Sign-On, you need to create an admin account, which you use to log in to the Red Hat Single Sign-On admin console. Prerequisites You saw no errors when you started the Red Hat Single Sign-On server. Procedure Open http://localhost:8080/auth in your web browser. The welcome page opens, confirming that the server is running. Welcome page Enter a username and password to create an initial admin user. 1.5. Logging into the admin console After you create the initial admin account, you can log in to the admin console. In this console, you add users and register applications to be secured by Red Hat Single Sign-On. Prerequisites You have an admin account for the admin console. 
Procedure Click the Administration Console link on the Welcome page or go directly to http://localhost:8080/auth/admin/ (the console URL). Note The Administration Console is generally referred to as the admin console for short in Red Hat Single Sign-On documentation. Enter the username and password you created on the Welcome page to open the admin console . Admin console login screen The initial screen for the admin console appears. Admin console Next steps Now that you can log in to the admin console, you can begin creating realms where administrators can create users and give them access to applications. For more details, see Creating a realm and a user . | [
"unzip rhsso-7.6.zip or tar -xvzf rh-sso-7.6.tar.gz",
"> Expand-Archive -Path 'C:Downloads\\rhsso-7.6.zip' -DestinationPath 'C:\\Downloads'",
"./bin/jboss-cli.sh",
"> .\\bin\\jboss-cli.bat",
"patch apply <path-to-zip>/rh-sso-7.6.11-patch.zip",
"./bin/jboss-cli.sh --file=docs/examples/enable-elytron-se17.cli",
"> .\\bin\\jboss-cli.bat --file=docs\\examples\\enable-elytron-se17.cli",
"cd bin ./standalone.sh",
"> ...\\bin\\standalone.bat"
] | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/getting_started_guide/installing-standalone_ |
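If the welcome page cannot be reached locally (by default the initial admin user can only be created from localhost), the account can usually be created from the command line instead. This is a hedged sketch rather than part of the official procedure: the add-user-keycloak script ships in the server's bin directory, but verify its options against your installed version, and treat the username and password shown here as placeholders.

# Create the initial admin user from the CLI, then start (or restart) the server.
cd rh-sso-7.6/bin
./add-user-keycloak.sh -u admin -p 'ChangeMe123!'   # placeholder credentials
./standalone.sh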
Chapter 15. Unsubscribing developers from a service | Chapter 15. Unsubscribing developers from a service As an admin, you can unsubscribe developers from a service. You may need to do this for one specific developer, or for multiple developers, in the event of a service deprecation. 15.1. Unsubscribing a single developer from services Unsubscribe a single developer from a service they are subscribed to through the Admin Portal: In the Admin Portal's Dashboard, navigate to Audience > Accounts > Listing > [select an account] > Service Subscriptions . Select Unsubscribe for the service that you want to remove the developer from. 15.2. Unsubscribing multiple developers from services Perform a bulk action to unsubscribe multiple developers from a deprecated or deleted service: Note This method only applies to services that have been deleted or suspended. You cannot perform a bulk unsubscription action on active services. In the Dashboard, navigate to: Audience > Accounts > Subscriptions . Do bulk state change. Using the service dropdown menu, identify the service from which you want to unsubscribe developers. Using the checkboxes on the left, select the developers you want to unsubscribe. Select Change State > Suspend to suspend the selected developer subscriptions. Remember that service plans need to be enabled. | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/unsubscribing_developers_from_a_service |
Chapter 20. Configuring Pre-Acknowledgments | Chapter 20. Configuring Pre-Acknowledgments Jakarta Messaging specifies three acknowledgement modes: AUTO_ACKNOWLEDGE CLIENT_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE In some cases you can afford to lose messages in the event of a failure, so it would make sense to acknowledge the message on the server before delivering it to the client. This extra mode is supported by JBoss EAP messaging and is called pre-acknowledge mode. The disadvantage of pre-acknowledging on the server before delivery is that the message will be lost if the server's system crashes after acknowledging the message but before it is delivered to the client. In that case, the message is lost and will not be recovered when the system restarts. Depending on your messaging case, pre-acknowledge mode can avoid extra network traffic and CPU usage at the cost of coping with message loss. An example use case for pre-acknowledgement is for stock price update messages. With these messages, it might be reasonable to lose a message in event of a crash since the price update message will arrive soon, overriding the price. Note If you use pre-acknowledge mode, you will lose transactional semantics for messages being consumed since they are being acknowledged first on the server, not when you commit the transaction. 20.1. Configuring the Server A connection factory can be configured to use pre-acknowledge mode by setting its pre-acknowledge attribute to true using the management CLI as below: 20.2. Configuring the Client Pre-acknowledge mode can be configured in the client's JNDI context environment, for example, in the jndi.properties file: Alternatively, to use pre-acknowledge mode using the Jakarta Messaging API, create a Jakarta Messaging Session with the ActiveMQSession.PRE_ACKNOWLEDGE constant. // messages will be acknowledge on the server *before* being delivered to the client Session session = connection.createSession(false, ActiveMQJMSConstants.PRE_ACKNOWLEDGE); | [
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=pre-acknowledge,value=true)",
"java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connection.ConnectionFactory=tcp://localhost:8080?preAcknowledge=true",
"// messages will be acknowledge on the server *before* being delivered to the client Session session = connection.createSession(false, ActiveMQJMSConstants.PRE_ACKNOWLEDGE);"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/messaging_pre_acknowledgements |
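To confirm that the server-side change took effect, you can read the attribute back with the management CLI. A quick check, assuming the default server and connection factory names used in the example above:

# Read back the pre-acknowledge flag on RemoteConnectionFactory.
./bin/jboss-cli.sh --connect \
  --command="/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:read-attribute(name=pre-acknowledge)"
# Expect "result" => true in the output.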
Chapter 21. Workload partitioning | Chapter 21. Workload partitioning Workload partitioning separates compute node CPU resources into distinct CPU sets. The primary objective is to keep platform pods on the specified cores to avoid interrupting the CPUs the customer workloads are running on. Workload partitioning isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. This ensures that the remaining CPUs in the cluster deployment are untouched and available exclusively for non-platform workloads. The minimum number of reserved CPUs required for the cluster management is four CPU Hyper-Threads (HTs). In the context of enabling workload partitioning and managing CPU resources effectively, nodes that are not configured correctly will not be permitted to join the cluster through a node admission webhook. When the workload partitioning feature is enabled, the machine config pools for control plane and worker will be supplied with configurations for nodes to use. Adding new nodes to these pools will make sure they are correctly configured before joining the cluster. Currently, nodes must have uniform configurations per machine config pool to ensure that correct CPU affinity is set across all nodes within that pool. After admission, nodes within the cluster identify themselves as supporting a new resource type called management.workload.openshift.io/cores and accurately report their CPU capacity. Workload partitioning can be enabled during cluster installation only by adding the additional field cpuPartitioningMode to the install-config.yaml file. When workload partitioning is enabled, the management.workload.openshift.io/cores resource allows the scheduler to correctly assign pods based on the cpushares capacity of the host, not just the default cpuset . This ensures more precise allocation of resources for workload partitioning scenarios. Workload partitioning ensures that CPU requests and limits specified in the pod's configuration are respected. In OpenShift Container Platform 4.16 or later, accurate CPU usage limits are set for platform pods through CPU partitioning. As workload partitioning uses the custom resource type of management.workload.openshift.io/cores , the values for requests and limits are the same due to a requirement by Kubernetes for extended resources. However, the annotations modified by workload partitioning correctly reflect the desired limits. Note Extended resources cannot be overcommitted, so request and limit must be equal if both are present in a container spec. 21.1. Enabling workload partitioning With workload partitioning, cluster management pods are annotated to correctly partition them into a specified CPU affinity. These pods operate normally within the minimum size CPU configuration specified by the reserved value in the Performance Profile. Additional Day 2 Operators that make use of workload partitioning should be taken into account when calculating how many reserved CPU cores should be set aside for the platform. Workload partitioning isolates user workloads from platform workloads using standard Kubernetes scheduling capabilities. Note You can enable workload partitioning during cluster installation only. You cannot disable workload partitioning postinstallation. However, you can change the CPU configuration for reserved and isolated CPUs postinstallation. 
Use this procedure to enable workload partitioning cluster wide: Procedure In the install-config.yaml file, add the additional field cpuPartitioningMode and set it to AllNodes . apiVersion: v1 baseDomain: devcluster.openshift.com cpuPartitioningMode: AllNodes 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 1 Sets up a cluster for CPU partitioning at install time. The default value is None . 21.2. Performance profiles and workload partitioning Applying a performance profile allows you to make use of the workload partitioning feature. An appropriately configured performance profile specifies the isolated and reserved CPUs. The recommended way to create a performance profile is to use the Performance Profile Creator (PPC) tool to create the performance profile. Additional resources About the Performance Profile Creator 21.3. Sample performance profile configuration apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: "ran-du.redhat.com" spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "vfio_pci.enable_sriov=1" - "vfio_pci.disable_idle_d3=1" - "module_blacklist=irdma" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: "restricted" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false Table 21.1. PerformanceProfile CR options for single-node OpenShift clusters PerformanceProfile CR field Description metadata.name Ensure that name matches the following fields set in related GitOps ZTP custom resources (CRs): include=openshift-node-performance-USD{PerformanceProfile.metadata.name} in TunedPerformancePatch.yaml name: 50-performance-USD{PerformanceProfile.metadata.name} in validatorCRs/informDuValidator.yaml spec.additionalKernelArgs "efi=runtime" Configures UEFI secure boot for the cluster host. spec.cpu.isolated Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match. Important The reserved and isolated CPU pools must not overlap and together must span all available cores. CPU cores that are not accounted for cause an undefined behaviour in the system. spec.cpu.reserved Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved. 
spec.hugepages.pages Set the number of huge pages ( count ) Set the huge pages size ( size ). Set node to the NUMA node where the hugepages are allocated ( node ) spec.realTimeKernel Set enabled to true to use the realtime kernel. spec.workloadHints Use workloadHints to define the set of top level flags for different type of workloads. The example configuration configures the cluster for low latency and high performance. Additional resources Recommended single-node OpenShift cluster configuration for vDU application workloads Workload partitioning | [
"apiVersion: v1 baseDomain: devcluster.openshift.com cpuPartitioningMode: AllNodes 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/enabling-workload-partitioning |
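After installing with cpuPartitioningMode: AllNodes and applying a performance profile, it is worth confirming that nodes advertise the new resource and that platform pods were actually annotated for partitioning. The read-only checks below are a sketch; the node name is a placeholder and the jsonpath expressions may need adjusting for your environment.

# Confirm the node reports the workload-partitioning resource in its capacity.
oc get node <node_name> \
  -o jsonpath="{.status.capacity['management\.workload\.openshift\.io/cores']}"

# Spot-check a platform pod for the management workload annotation.
oc get pods -n openshift-kube-apiserver -l app=openshift-kube-apiserver \
  -o jsonpath="{.items[0].metadata.annotations['target\.workload\.openshift\.io/management']}"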
Chapter 3. Verifying OpenShift Data Foundation deployment for Internal-attached devices mode | Chapter 3. Verifying OpenShift Data Foundation deployment for Internal-attached devices mode Use this section to verify that OpenShift Data Foundation is deployed correctly. 3.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 3.1, "Pods corresponding to OpenShift Data Foundation cluster" . Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Table 3.1. Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node csi-addons-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 3.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard under Overview tab, verify that both Storage Cluster and Data Resiliency has a green tick mark. In the Details card , verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 3.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). 
This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_z/verifying_openshift_data_foundation_deployment_for_internal_attached_devices_mode |
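The console checks in this chapter can also be approximated from the command line, which is convenient for scripting or when the web console is unavailable. A brief sketch; the resource names match the default internal-mode deployment described above.

# Pods in openshift-storage should be Running or Completed.
oc get pods -n openshift-storage

# Storage cluster and Ceph cluster health summaries.
oc get storagecluster -n openshift-storage
oc get cephcluster -n openshift-storage

# Storage classes created by the deployment.
oc get storageclass | grep -E 'ocs-storagecluster|openshift-storage.noobaa.io'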
Chapter 1. Automation controller overview | Chapter 1. Automation controller overview With Ansible Automation Platform users across an organization can share, vet, and manage automation content by means of a simple, powerful, and agentless technical implementation. IT managers can provide guidelines on how automation is applied to individual teams. Automation developers can write tasks that use existing knowledge, without the operational overhead of conforming to complex tools and frameworks. It is a more secure and stable foundation for deploying end-to-end automation solutions, from hybrid cloud to the edge. Ansible Automation Platform includes automation controller, which enables users to define, operate, scale, and delegate automation across their enterprise. 1.1. Real-time playbook output and exploration With automation controller you can watch playbooks run in real time, seeing each host as they check in. You can go back and explore the results for specific tasks and hosts in great detail, search for specific plays or hosts and see just those results, or locate errors that need to be corrected. 1.2. "Push Button" automation Use automation controller to access your favorite projects and re-trigger execution from the web interface. Automation controller asks for input variables, prompts for your credentials, starts and monitors jobs, and displays results and host history. 1.3. Simplified role-based access control and auditing With automation controller you can: Grant permissions to perform a specific task to different teams or explicit users through role-based access control (RBAC). Example tasks include viewing, creating, or modifying a file. Keep some projects private, while enabling some users to edit inventories, and others to run playbooks against certain systems, either in check (dry run) or live mode. Enable certain users to use credentials without exposing the credentials to them. Automation controller records the history of operations and who made them, including objects edited and jobs launched. If you want to give any user or team permissions to use a job template, you can assign permissions directly on the job template. Credentials are full objects in the automation controller RBAC system, and can be assigned to many users or teams for use. Automation controller includes an auditor type. A system-level auditor can see all aspects of the systems automation, but does not have permission to run or change automation. An auditor is useful for a service account that scrapes automation information from the REST API. Additional resources For more information about user roles, see Managing access with role based access control . 1.4. Cloud and autoscaling flexibility Automation controller includes a powerful optional provisioning callback feature that enables nodes to request configuration on-demand. This is an ideal solution for a cloud auto-scaling scenario and includes the following features: It integrates with provisioning servers such as Cobbler and deals with managed systems with unpredictable uptimes. It requires no management software to be installed on remote nodes. The callback solution can be triggered by a call to curl or wget , and can be embedded in init scripts, kickstarts, or preseeds. You can control access so that only machines listed in the inventory can request configuration. 1.5. 
The ideal RESTful API The automation controller REST API is the ideal RESTful API for a systems management application, with all resources fully discoverable, paginated, searchable, and well modeled. A styled API browser enables API exploration from the API root at http://<server name>/api/ , showing off every resource and relation. Everything that can be done in the user interface can be done in the API. 1.6. Backup and restore Ansible Automation Platform can backup and restore your systems or systems, making it easy for you to backup and replicate your instance as required. 1.7. Ansible Galaxy integration By including an Ansible Galaxy requirements.yml file in your project directory, automation controller automatically fetches the roles your playbook needs from Galaxy, GitHub, or your local source control. For more information, see Ansible Galaxy Support . 1.8. Inventory support for OpenStack Dynamic inventory support is available for OpenStack. With this you can target any of the virtual machines or images running in your OpenStack cloud. For more information, see OpenStack credential type . 1.9. Remote command execution Use remote command execution to perform a simple task, such as adding a single user, updating a single security vulnerability, or restarting a failing service. Any task that you can describe as a single Ansible play can be run on a host or group of hosts in your inventory. You can manage your systems quickly and easily. Because of an RBAC engine and detailed audit logging, you know which user has completed a specific task. 1.10. System tracking You can collect facts by using the fact caching feature. For more information, see Fact Caching . 1.11. Integrated notifications Keep track of the status of your automation. You can configure the following notifications: stackable notifications for job templates, projects, or entire organizations different notifications for job start, job success, job failure, and job approval (for workflow nodes) The following notification sources are supported: Email Grafana IRC Mattermost PagerDuty Rocket.Chat Slack Twilio Webhook (post to an arbitrary webhook, for integration into other tools) You can also customize notification messages for each of the preceding notification types. 1.12. Integrations Automation controller supports the following integrations: Dynamic inventory sources for Red Hat Satellite 6. For more information, see Red Hat Satellite 6 . Red Hat Insights integration, enabling Insights playbooks to be used as an Ansible Automation Platform project. For more information, see Setting up Red Hat Insights for Red Hat Ansible Automation Platform Remediations . Automation hub acts as a content provider for automation controller, requiring both an automation controller deployment and an automation hub deployment running alongside each other. 1.13. Custom Virtual Environments With Custom Ansible environment support you can have different Ansible environments and specify custom paths for different teams and jobs. 1.14. Authentication enhancements Automation controller supports: LDAP SAML token-based authentication With LDAP and SAML support you can integrate your enterprise account information in a more flexible manner. Token-based authentication permits authentication of third-party tools and services with automation controller through integrated OAuth 2 token support. 1.15. Cluster management Run time management of cluster groups enables configurable scaling. 1.16. 
Workflow enhancements To model your complex provisioning, deployment, and orchestration workflows, you can use automation controller expanded workflows in several ways: Inventory overrides for Workflows You can override an inventory across a workflow at workflow definition time, or at launch time. Use automation controller to define your application deployment workflows, and then re-use them in many environments. Convergence nodes for Workflows When modeling complex processes, you must sometimes wait for many steps to finish before proceeding. Automation controller workflows can replicate this; workflow steps can wait for any number of earlier workflow steps to complete properly before proceeding. Workflow Nesting You can re-use individual workflows as components of a larger workflow. Examples include combining provisioning and application deployment workflows into a single workflow. Workflow Pause and Approval You can build workflows containing approval nodes that require user intervention. This makes it possible to pause workflows in between playbooks so that a user can give approval (or denial) for continuing on to the step in the workflow. For more information, see Workflows in automation controller . 1.17. Job distribution Take a fact gathering or configuration job running across thousands of machines and divide it into slices that can be distributed across your automation controller cluster. This increases reliability, offers faster job completion, and improved cluster use. For example, you can change a parameter across 15,000 switches at scale, or gather information across your multi-thousand-node RHEL estate. For more information, see Job slicing . 1.18. Support for deployment in a FIPS-enabled environment Automation controller deploys and runs in restricted modes such as FIPS. 1.19. Limit the number of hosts per organization Many large organizations have instances shared among many organizations. To ensure that one organization cannot use all the licensed hosts, this feature enables superusers to set a specified upper limit on how many licensed hosts can that you can allocate to each organization. The automation controller algorithm factors changes in the limit for an organization and the number of total hosts across all organizations. Inventory updates fail if an inventory synchronization brings an organization out of compliance with the policy. Additionally, superusers are able to over-allocate their licenses, with a warning. 1.20. Inventory plugins The following inventory plugins are used from upstream collections: amazon.aws.aws_ec2 community.vmware.vmware_vm_inventory azure.azcollection.azure_rm google.cloud.gcp_compute theforeman.foreman.foreman openstack.cloud.openstack ovirt.ovirt.ovirt awx.awx.tower 1.21. Secret management system With a secret management system, external credentials are stored and supplied for use in automation controller so you need not provide them directly. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/assembly-ug-overview |
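As an illustration of the provisioning callback feature described above, a newly provisioned node can request its own configuration with a single curl call from an init script, kickstart, or cloud-init. The template ID and host_config_key below are placeholders; take the real values from the job template's provisioning callback details in the controller UI, and drop -k once the controller certificate is trusted.

# Example provisioning callback from a kickstart %post or cloud-init section.
curl -sk --data "host_config_key=5a8ec154832b780b9bdef1061764ae5a" \
  https://controller.example.com/api/v2/job_templates/42/callback/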
Planning your deployment | Planning your deployment Red Hat OpenShift Data Foundation 4.18 Important considerations when deploying Red Hat OpenShift Data Foundation 4.18 Red Hat Storage Documentation Team Abstract Read this document for important considerations when planning your Red Hat OpenShift Data Foundation deployment. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Introduction to OpenShift Data Foundation Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management. Red Hat OpenShift Data Foundation services are primarily made available to applications by way of storage classes that represent the following components: Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL. Important Block storage should be used for any worklaod only when it does not require sharing the data across multiple containers. Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, Wordpress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ. Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores. On premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and Pytorch. Note Running PostgresSQL workload on CephFS persistent volume is not supported and it is recommended to use RADOS Block Device (RBD) volume. For more information, see the knowledgebase solution ODF Database Workloads Must Not Use CephFS PVs/PVCs . Red Hat OpenShift Data Foundation version 4.x integrates a collection of software projects, including: Ceph, providing block storage, a shared and distributed file system, and on-premises object storage Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims NooBaa, providing a Multicloud Object Gateway OpenShift Data Foundation, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Data Foundation services. Chapter 2. 
Architecture of OpenShift Data Foundation Red Hat OpenShift Data Foundation provides services for, and can run internally from the Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on installer-provisioned or user-provisioned infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To know more about interoperability of components for Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture . Tip For IBM Power, see Installing on IBM Power . 2.1. About operators Red Hat OpenShift Data Foundation comprises of three main operators, which codify administrative tasks and custom resources so that you can easily automate the task and resource characteristics. Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention. OpenShift Data Foundation operator A meta-operator that draws on other operators in specific tested ways to codify and enforce the recommendations and requirements of a supported Red Hat OpenShift Data Foundation deployment. The rook-ceph and noobaa operators provide the storage cluster resource that wraps these resources. Rook-ceph operator This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services Object Bucket Claims (OBCs) made against it in on-premises environments. Additionally, for internal mode clusters, it provides the ceph cluster resource, which manages the deployments and services representing the following: Object Storage Daemons (OSDs) Monitors (MONs) Manager (MGR) Metadata servers (MDS) RADOS Object Gateways (RGWs) on-premises only Multicloud Object Gateway operator This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway (MCG) object service. It creates an object storage class and services the OBCs made against it. Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint. Note OpenShift Data Foundation's default configuration for MCG is optimized for low resource consumption and not performance. If you plan to use MCG often, see information about increasing resource limits in the knowledebase article Performance tuning guide for Multicloud Object Gateway . 2.2. Storage cluster deployment approaches The growing list of operating modalities is an evidence that flexibility is a core tenet of Red Hat OpenShift Data Foundation. This section provides you with information that will help you to select the most appropriate approach for your environments. You can deploy Red Hat OpenShift Data Foundation either entirely within OpenShift Container Platform (Internal approach) or to make available the services from a cluster running outside of OpenShift Container Platform (External approach). 2.2.1. 
Internal approach Deployment of Red Hat OpenShift Data Foundation entirely within Red Hat OpenShift Container Platform has all the benefits of operator-based deployment and management. You can use the internal-attached device approach in the graphical user interface (GUI) to deploy Red Hat OpenShift Data Foundation in internal mode using the local storage operator and local storage devices. Ease of deployment and management are the highlights of running OpenShift Data Foundation services internally on OpenShift Container Platform. There are two different deployment modalities available when Red Hat OpenShift Data Foundation is running entirely within Red Hat OpenShift Container Platform: Simple Optimized Simple deployment Red Hat OpenShift Data Foundation services run co-resident with applications. The operators in Red Hat OpenShift Container Platform manage these applications. A simple deployment is best for situations where: Storage requirements are not clear. Red Hat OpenShift Data Foundation services run co-resident with the applications. Creating a node instance of a specific size is difficult, for example, on bare metal. For Red Hat OpenShift Data Foundation to run co-resident with the applications, the nodes must have local storage devices, or portable storage devices attached to them dynamically, like EBS volumes on EC2, or vSphere Virtual Volumes on VMware, or SAN volumes. Note PowerVC dynamically provisions the SAN volumes. Optimized deployment Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Red Hat OpenShift Container Platform manages these infrastructure nodes. An optimized approach is best for situations when: Storage requirements are clear. Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Creating a node instance of a specific size is easy, for example, on cloud or virtualized environments, and so on. 2.2.2. External approach Red Hat OpenShift Data Foundation exposes the Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes. The external approach is best used when: Storage requirements are significant (600+ storage devices). Multiple OpenShift Container Platform clusters need to consume storage services from a common external cluster. Another team, such as Site Reliability Engineering (SRE) or storage, needs to manage the external cluster, possibly a pre-existing one, that provides storage services. 2.3. Node types Nodes run the container runtime, as well as services, to ensure that the containers are running, and maintain network communication and separation between the pods. In OpenShift Data Foundation, there are three types of nodes. Table 2.1. Types of nodes Node Type Description Master These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers. Infrastructure (Infra) Infra nodes run cluster-level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters. In order to separate OpenShift Data Foundation layer workload from applications, ensure that you use infra nodes for OpenShift Data Foundation in virtualized and cloud environments. To create Infra nodes, you can provision new nodes labeled as infra .
For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation . Worker Worker nodes are also known as application nodes since they run applications. When OpenShift Data Foundation is deployed in internal mode, you require a minimum of 3 worker nodes. Make sure that the nodes are spread across 3 different racks, or availability zones, to ensure availability. In order for OpenShift Data Foundation to run on worker nodes, you need to attach local storage devices, or portable storage devices, to the worker nodes dynamically. When OpenShift Data Foundation is deployed in external mode, it runs on multiple nodes. This allows Kubernetes to reschedule on the available nodes in case of a failure. Note OpenShift Data Foundation requires the same number of subscriptions as OpenShift Container Platform. However, if OpenShift Data Foundation is running on infra nodes, OpenShift does not require an OpenShift Container Platform subscription for these nodes. Therefore, the OpenShift Data Foundation control plane does not require additional OpenShift Container Platform and OpenShift Data Foundation subscriptions. For more information, see Chapter 6, Subscriptions . Chapter 3. Internal storage services Red Hat OpenShift Data Foundation service is available for consumption internally to the Red Hat OpenShift Container Platform that runs on the following infrastructure: Amazon Web Services (AWS) Bare metal VMware vSphere Microsoft Azure Google Cloud Red Hat OpenStack 13 or higher (installer-provisioned infrastructure) [Technology Preview] IBM Power IBM Z and IBM(R) LinuxONE ROSA with hosted control planes (HCP) Creation of an internal cluster resource results in the internal provisioning of the OpenShift Data Foundation base services, and makes additional storage classes available to the applications. Chapter 4. External storage services Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare metal Red Hat OpenStack platform (Technology Preview) IBM Power IBM Z The OpenShift Data Foundation operators create and manage services to satisfy Persistent Volume (PV) and Object Bucket Claims (OBCs) against the external services. The external cluster can serve block, file, and object storage classes for applications that run on OpenShift Container Platform. The operators do not deploy or manage the external clusters. Chapter 5. Security considerations 5.1. FIPS-140-2 The Federal Information Processing Standard Publication 140-2 (FIPS-140-2) is a standard that defines a set of security requirements for the use of cryptographic modules. This standard is mandated by law for US government agencies and contractors, and is also referenced in other international and industry-specific standards. Red Hat OpenShift Data Foundation now uses the FIPS-validated cryptographic modules. Red Hat Enterprise Linux CoreOS (RHCOS) delivers these modules. Currently, the Cryptographic Module Validation Program (CMVP) processes the cryptography modules. You can see the state of these modules at Modules in Process List . For more up-to-date information, see the Red Hat Knowledgebase solution RHEL core crypto components . Note Enable the FIPS mode on the OpenShift Container Platform before you install OpenShift Data Foundation.
OpenShift Container Platform must run on the RHCOS nodes, as the feature does not support OpenShift Data Foundation deployment on Red Hat Enterprise Linux 7 (RHEL 7). For more information, see Installing a cluster in FIPS mode and Support for FIPS cryptography of the Installing guide in OpenShift Container Platform documentation. 5.2. Proxy environment A proxy environment is a production environment that denies direct access to the internet and provides an available HTTP or HTTPS proxy instead. Red Hat OpenShift Container Platform is configured to use a proxy by modifying the proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. Red Hat supports deployment of OpenShift Data Foundation in proxy environments when OpenShift Container Platform has been configured according to configuring the cluster-wide proxy . 5.3. Data encryption options Encryption lets you encode your data to make it impossible to read without the required encryption keys. This mechanism protects the confidentiality of your data in the event of a physical security breach that results in physical media escaping your custody. The per-PV encryption also provides access protection from other namespaces inside the same OpenShift Container Platform cluster. Data is encrypted when it is written to the disk, and decrypted when it is read from the disk. Working with encrypted data might incur a small penalty to performance. Encryption is only supported for new clusters deployed using Red Hat OpenShift Data Foundation 4.6 or higher. An existing encrypted cluster that is not using an external Key Management System (KMS) cannot be migrated to use an external KMS. Previously, HashiCorp Vault was the only supported KMS for Cluster-wide and Persistent Volume encryptions. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault Key/Value (KV) secret engine API, version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. As of OpenShift Data Foundation 4.12, Thales CipherTrust Manager has been introduced as an additional supported KMS. Important KMS is required for StorageClass encryption, and is optional for cluster-wide encryption. Storage class encryption requires a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 5.3.1. Cluster-wide encryption Red Hat OpenShift Data Foundation supports cluster-wide encryption (encryption-at-rest) for all the disks and Multicloud Object Gateway operations in the storage cluster. OpenShift Data Foundation uses Linux Unified Key Setup (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher where each device has a different encryption key. The keys are stored using a Kubernetes secret or an external KMS. Both methods are mutually exclusive and you cannot migrate between methods. Encryption is disabled by default for block and file storage. You can enable encryption for the cluster at the time of deployment. The Multicloud Object Gateway supports encryption by default. See the deployment guides for more information.
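For orientation, the sketch below shows how cluster-wide encryption is commonly expressed on the StorageCluster resource. This is a hedged illustration only: the field names reflect the OCS operator's StorageCluster schema as commonly documented, the KMS connection details themselves live in the separate ocs-kms-connection-details config map, and the deployment guide for your version remains the authoritative reference.

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  encryption:
    clusterWide: true     # enable encryption-at-rest for all cluster disks
    kms:
      enable: true        # optional: store keys in an external KMS (for example, HashiCorp Vault)
  # ... remaining StorageCluster fields (storageDeviceSets, and so on) omitted
```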
OpenShift Data Foundation supports cluster wide encryption with and without Key Management System (KMS). Cluster wide encryption with KMS is supported using the following service providers: HashiCorp Vault Thales CipherTrust Manager Common security practices require periodic encryption key rotation. OpenShift Data Foundation automatically rotates encryption keys stored in a Kubernetes secret (non-KMS) and Vault on a weekly basis. However, key rotation for Vault KMS must be enabled after the storage cluster creation and does not happen by default. For more information, refer to the deployment guides. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Cluster wide encryption with HashiCorp Vault KMS provides two authentication methods: Token : This method allows authentication using Vault tokens. A Kubernetes secret containing the Vault token is created in the openshift-storage namespace and is used for authentication. If this authentication method is selected, the administrator must provide the Vault token that provides access to the backend path in Vault, where the encryption keys are stored. Kubernetes : This method allows authentication with Vault using service accounts. If this authentication method is selected, the administrator must provide the name of the role configured in Vault that provides access to the backend path, where the encryption keys are stored. The value of this role is then added to the ocs-kms-connection-details config map. Note OpenShift Data Foundation on IBM Cloud platform supports Hyper Protect Crypto Services (HPCS) Key Management Services (KMS) as the encryption solution in addition to HashiCorp Vault KMS. Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 5.3.2. Storage class encryption You can encrypt persistent volumes (block only) with storage class encryption using an external Key Management System (KMS) to store device encryption keys. Persistent volume encryption is only available for RADOS Block Device (RBD) persistent volumes. See how to create a storage class with persistent volume encryption . Storage class encryption is supported in OpenShift Data Foundation 4.7 or higher with HashiCorp Vault KMS. Storage class encryption is supported in OpenShift Data Foundation 4.12 or higher with both HashiCorp Vault KMS and Thales CipherTrust Manager KMS. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 5.3.3. CipherTrust Manager Red Hat OpenShift Data Foundation version 4.12 introduced Thales CipherTrust Manager as an additional Key Management System (KMS) provider for your deployment. Thales CipherTrust Manager provides centralized key lifecycle management. CipherTrust Manager supports Key Management Interoperability Protocol (KMIP), which enables communication between key management systems. CipherTrust Manager is enabled during deployment. 5.3.4.
Data encryption in-transit via Red Hat Ceph Storage's messenger version 2 protocol (msgr2) Starting with OpenShift Data Foundation version 4.14, Red Hat Ceph Storage's messenger version 2 protocol can be used to encrypt data in-transit. This fulfills an important security requirement for your infrastructure. In-transit encryption can be enabled during deployment while the cluster is being created. See the deployment guide for your environment for instructions on enabling data encryption in-transit during cluster creation. The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx. Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx. Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . 5.4. Encryption in Transit You need to enable IPsec so that all the network traffic between the nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel. By default, IPsec is disabled. You can enable it either during or after installing the cluster. If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header. For more information on how to configure the IPsec encryption, see Configuring IPsec encryption of the Networking guide in OpenShift Container Platform documentation. Chapter 6. Subscriptions 6.1. Subscription offerings The Red Hat OpenShift Data Foundation subscription is based on "core-pairs," similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Data Foundation 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs. As with OpenShift Container Platform: OpenShift Data Foundation subscriptions are stackable to cover larger hosts. Cores can be distributed across as many virtual machines (VMs) as needed. For example, ten 2-core subscriptions will provide 20 cores. In the case of IBM Power, a 2-core subscription at an SMT level of 8 will provide 2 cores or 16 vCPUs that can be used across any number of VMs. OpenShift Data Foundation subscriptions are available with Premium or Standard support. 6.2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation cluster containing PVs participating in active replication, either as a source or destination, requires an OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 6.3. Cores versus vCPUs and hyperthreading Making a determination about whether or not a particular system consumes one or more cores is currently dependent on whether or not that system has hyperthreading available.
Hyperthreading is only a feature of Intel CPUs. Visit the Red Hat Customer Portal to determine whether a particular system supports hyperthreading. Virtualized OpenShift nodes using logical CPU threads, also known as simultaneous multithreading (SMT) for AMD EPYC CPUs or hyperthreading with Intel CPUs, calculate their core utilization for OpenShift subscriptions based on the number of cores/CPUs assigned to the node; however, each subscription covers 4 vCPUs/cores when logical CPU threads are used. Red Hat's subscription management tools assume logical CPU threads are enabled by default on all systems. For systems where hyperthreading is enabled and where one hyperthread equates to one visible system core, the calculation of cores is a ratio of 2 cores to 4 vCPUs. Therefore, a 2-core subscription covers 4 vCPUs in a hyperthreaded system. A large virtual machine (VM) might have 8 vCPUs, equating to 4 subscription cores. As subscriptions come in 2-core units, you will need two 2-core subscriptions to cover these 4 cores or 8 vCPUs. Where hyperthreading is not enabled, and where each visible system core correlates directly to an underlying physical core, the calculation of cores is a ratio of 2 cores to 2 vCPUs. 6.3.1. Cores versus vCPUs and simultaneous multithreading (SMT) for IBM Power Making a determination about whether or not a particular system consumes one or more cores is currently dependent on the level of simultaneous multithreading configured (SMT). IBM Power provides simultaneous multithreading levels of 1, 2, 4, or 8 for each core, which correspond to the number of vCPUs as in the table below. Table 6.1. Different SMT levels and their corresponding vCPUs SMT level SMT=1 SMT=2 SMT=4 SMT=8 1 Core # vCPUs=1 # vCPUs=2 # vCPUs=4 # vCPUs=8 2 Cores # vCPUs=2 # vCPUs=4 # vCPUs=8 # vCPUs=16 4 Cores # vCPUs=4 # vCPUs=8 # vCPUs=16 # vCPUs=32 For systems where SMT is configured, the calculation for the number of cores required for subscription purposes depends on the SMT level. Therefore, a 2-core subscription corresponds to 2 vCPUs on an SMT level of 1, to 4 vCPUs on an SMT level of 2, to 8 vCPUs on an SMT level of 4, and to 16 vCPUs on an SMT level of 8, as seen in the table above. A large virtual machine (VM) might have 16 vCPUs, which at an SMT level of 8 will require a 2-core subscription based on dividing the # of vCPUs by the SMT level (16 vCPUs / 8 for SMT-8 = 2). As subscriptions come in 2-core units, you will need one 2-core subscription to cover these 2 cores or 16 vCPUs. 6.4. Splitting cores Systems that require an odd number of cores need to consume a full 2-core subscription. For example, a system that is calculated to require only 1 core will end up consuming a full 2-core subscription once it is registered and subscribed. When a single virtual machine (VM) with 2 vCPUs uses hyperthreading resulting in 1 calculated vCPU, a full 2-core subscription is required; a single 2-core subscription may not be split across two VMs with 2 vCPUs using hyperthreading. See section Cores versus vCPUs and hyperthreading for more information. It is recommended that virtual instances be sized so that they require an even number of cores. 6.4.1. Shared Processor Pools for IBM Power IBM Power has a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for Red Hat OpenShift Data Foundation should be a multiple of core-pairs. 6.5.
Subscription requirements Red Hat OpenShift Data Foundation components can run on either OpenShift Container Platform worker or infrastructure nodes, for which you can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.4 as the host operating system. RHEL 7 is now deprecated. OpenShift Data Foundation subscriptions are required for every OpenShift Container Platform subscribed core with a ratio of 1:1. When using infrastructure nodes, the rule to subscribe all OpenShift worker node cores for OpenShift Data Foundation applies even though they don't need any OpenShift Container Platform or any OpenShift Data Foundation subscriptions. You can use labels to state whether a node is a worker or an infrastructure node. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and Allocating Storage Resources guide. Chapter 7. Infrastructure requirements 7.1. Platform requirements Red Hat OpenShift Data Foundation 4.18 is supported only on OpenShift Container Platform version 4.18 and its minor versions. Bug fixes for this version of Red Hat OpenShift Data Foundation will be released as bug fix versions. For more details, see the Red Hat OpenShift Container Platform Life Cycle Policy . For external cluster subscription requirements, see the Red Hat Knowledgebase article OpenShift Data Foundation Subscription Guide . For a complete list of supported platform versions, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . 7.1.1. Amazon EC2 Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides EBS storage via the aws-ebs provisioner. OpenShift Data Foundation supports gp2-csi and gp3-csi drivers that were introduced by Amazon Web Services (AWS). These drivers offer better storage expansion capabilities and a reduced monthly price point ( gp3-csi ). You can now select the new drivers when selecting your storage class. If high throughput is required, gp3-csi is recommended when deploying OpenShift Data Foundation. If you need high input/output operations per second (IOPS), the recommended EC2 instance types are D2 or D3 . 7.1.2. Bare Metal Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.3. VMware vSphere Supports internal clusters and consuming external clusters. Recommended versions: vSphere 7.0 or later vSphere 8.0 or later For more details, see the VMware vSphere infrastructure requirements . Note If VMware ESXi does not recognize its devices as flash, mark them as flash devices. Before Red Hat OpenShift Data Foundation deployment, refer to Mark Storage Devices as Flash . Additionally, an internal cluster must meet both the storage device requirements and have a storage class providing either a vSAN or VMFS datastore via the vsphere-volume provisioner, or VMDK, RDM, or DirectPath storage devices via the Local Storage Operator. 7.1.4. Microsoft Azure Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides an Azure disk via the azure-disk provisioner. 7.1.5. Google Cloud Supports internal Red Hat OpenShift Data Foundation clusters only.
An internal cluster must meet both the storage device requirements and have a storage class that provides a GCE Persistent Disk via the gce-pd provisioner. 7.1.6. Red Hat OpenStack Platform [Technology Preview] Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides a standard disk via the Cinder provisioner. 7.1.7. IBM Power Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.8. IBM Z and IBM(R) LinuxONE Supports internal Red Hat OpenShift Data Foundation clusters. Also, supports external mode where Red Hat Ceph Storage is running on x86. An internal cluster must meet both the storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.9. ROSA with hosted control planes (HCP) Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides AWS EBS volumes via the gp3-csi provisioner. 7.1.10. Any platform Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.2. External mode requirement 7.2.1. Red Hat Ceph Storage To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker lab. Select Service Type as ODF as Self-Managed Service . Select the appropriate Version from the drop-down. On the Versions tab, click the Supported RHCS Compatibility tab. For instructions regarding how to install an RHCS cluster, see the installation guide . 7.3. Resource requirements Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by Kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators, or workloads that are running on these nodes. Table 7.1. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 30 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices External 4 CPU (logical) 16 GiB memory Not applicable Example: For a 3-node cluster in an internal mode deployment with a single device set, a minimum of 3 x 10 = 30 units of CPU are required. For more information, see Chapter 6, Subscriptions and CPU units . For additional guidance with designing your Red Hat OpenShift Data Foundation cluster, see the ODF Sizing Tool . CPU units In this section, 1 CPU Unit maps to the Kubernetes concept of 1 CPU unit. 1 unit of CPU is equivalent to 1 core for non-hyperthreaded CPUs. 2 units of CPU are equivalent to 1 core for hyperthreaded CPUs.
Red Hat OpenShift Data Foundation core-based subscriptions always come in pairs (2 cores). Table 7.2. Aggregate minimum resource requirements for IBM Power Deployment Mode Base services Internal 48 CPU (logical) 192 GiB memory 3 storage devices, each with an additional 500 GB of disk External 24 CPU (logical) 48 GiB memory Example: For a 3-node cluster in an internal-attached devices mode deployment, a minimum of 3 x 16 = 48 units of CPU and 3 x 64 = 192 GiB of memory are required. 7.3.1. Resource requirements for IBM Z and IBM LinuxONE infrastructure Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by Kubernetes on OpenShift Container Platform nodes . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Table 7.3. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only (IBM Z and IBM(R) LinuxONE) Deployment Mode Base services Additional device Set IBM Z and IBM(R) LinuxONE minimum hardware requirements Internal 30 CPU (logical) 3 nodes with 10 CPUs (logical) each 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices 1 IFL External 4 CPU (logical) 16 GiB memory Not applicable Not applicable CPU Is the number of virtual cores defined in the hypervisor, IBM Z/VM, Kernel Virtual Machine (KVM), or both. IFL (Integrated Facility for Linux) Is the physical core for IBM Z and IBM(R) LinuxONE. Minimum system environment In order to operate a minimal cluster with 1 logical partition (LPAR), one additional IFL is required on top of the 6 IFLs. OpenShift Container Platform consumes these IFLs . 7.3.2. Minimum deployment resource requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators, or workloads that are running on these nodes. Table 7.4. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Internal 24 CPU (logical) 72 GiB memory 3 storage devices If you want to add additional device sets, we recommend converting your minimum deployment to a standard deployment. 7.3.3. Compact deployment resource requirements Red Hat OpenShift Data Foundation can be installed on a three-node OpenShift compact bare metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators, or workloads that are running on these nodes. Table 7.5. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 24 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices To configure OpenShift Container Platform on a compact bare metal cluster, see Configuring a three-node cluster and Delivering a Three-node Architecture for Edge Deployments . 7.3.4. Resource requirements for MCG only deployment An OpenShift Data Foundation cluster deployed only with the Multicloud Object Gateway (MCG) component provides flexibility in deployment and helps to reduce resource consumption. Table 7.6.
Aggregate resource requirements for MCG only deployment Deployment Mode Core Database (DB) Endpoint Internal 1 CPU 4 GiB memory 0.5 CPU 4 GiB memory 1 CPU 2 GiB memory Note The default auto scale range is between 1 and 2. 7.3.5. Resource requirements for using Network File System You can create exports using Network File System (NFS) that can then be accessed externally from the OpenShift cluster. If you plan to use this feature, the NFS service consumes 3 CPUs and 8 GiB of RAM. NFS is optional and is disabled by default. The NFS volume can be accessed in two ways: In-cluster: by an application pod inside of the OpenShift cluster. Out of cluster: from outside of the OpenShift cluster. For more information about the NFS feature, see Creating exports using NFS . 7.3.6. Resource requirements for performance profiles OpenShift Data Foundation provides three performance profiles to enhance the performance of the clusters. You can choose one of these profiles based on your available resources and desired performance level during deployment or post-deployment. Table 7.7. Recommended resource requirement for different performance profiles Performance profile CPU Memory Lean 24 72 GiB Balanced 30 72 GiB Performance 45 96 GiB Important Make sure to select the profiles based on the available free resources as you might already be running other workloads. 7.4. Pod placement rules Kubernetes is responsible for pod placement based on declarative placement rules. The Red Hat OpenShift Data Foundation base service placement rules for an internal cluster can be summarized as follows: Nodes are labeled with the cluster.ocs.openshift.io/openshift-storage key Nodes are sorted into pseudo failure domains if none exist Components requiring high availability are spread across failure domains A storage device must be accessible in each failure domain This leads to the requirement that there be at least three nodes, and that nodes be in three distinct rack or zone failure domains in the case of pre-existing topology labels . For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments. 7.5. Storage device requirements Use this section to understand the different storage capacity requirements that you can consider when planning internal mode deployments and upgrades. We generally recommend 12 devices or fewer per node. This recommendation ensures that nodes stay below cloud provider dynamic storage device attachment limits, and limits the recovery time after node failures with local storage devices. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Storage nodes should have at least two disks, one for the operating system and the remaining disks for OpenShift Data Foundation components. Note You can expand the storage capacity only in increments of the capacity selected at the time of installation. 7.5.1. Dynamic storage devices Red Hat OpenShift Data Foundation permits the selection of either 0.5 TiB, 2 TiB, or 4 TiB capacities as the request size for dynamic storage device sizes. The number of dynamic storage devices that can run per node is a function of the node size, underlying provisioner limits, and resource requirements . 7.5.2.
Local storage devices For local storage deployment, any disk size of 16 TiB or less can be used, and all disks should be of the same size and type. The number of local storage devices that can run per node is a function of the node size and resource requirements . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Note Disk partitioning is not supported. 7.5.3. Capacity planning Always ensure that available storage capacity stays ahead of consumption. Recovery is difficult if available storage capacity is completely exhausted, and requires more intervention than simply adding capacity or deleting or migrating content. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. When you get to 75% (near-full), either free up space or expand the cluster. When you get the 85% (full) alert, it indicates that you have run out of storage space completely and cannot free up space using standard commands. At this point, contact Red Hat Customer Support . The following tables show example node configurations for Red Hat OpenShift Data Foundation with dynamic storage devices. Table 7.8. Example initial configurations with 3 nodes Storage Device size Storage Devices per node Total capacity Usable storage capacity 0.5 TiB 1 1.5 TiB 0.5 TiB 2 TiB 1 6 TiB 2 TiB 4 TiB 1 12 TiB 4 TiB Table 7.9. Example of expanded configurations with 30 nodes (N) Storage Device size (D) Storage Devices per node (M) Total capacity (D * M * N) Usable storage capacity (D*M*N/3) 0.5 TiB 3 45 TiB 15 TiB 2 TiB 6 360 TiB 120 TiB 4 TiB 9 1080 TiB 360 TiB Chapter 8. Network requirements OpenShift Data Foundation requires that at least one network interface used for the cluster network is capable of at least 10 gigabit network speeds. This section further covers different network considerations for planning deployments. 8.1. IPv6 support Red Hat OpenShift Data Foundation version 4.12 introduced support for IPv6. IPv6 is supported in single stack only, and cannot be used simultaneously with IPv4. IPv6 is the default behavior in OpenShift Data Foundation when IPv6 is turned on in OpenShift Container Platform. Red Hat OpenShift Data Foundation version 4.14 introduces IPv6 auto detection and configuration. Clusters using IPv6 will automatically be configured accordingly. OpenShift Container Platform dual stack with Red Hat OpenShift Data Foundation IPv4 is supported from version 4.13 and later. Dual stack on Red Hat OpenShift Data Foundation IPv6 is not supported. 8.2. Multi network plug-in (Multus) support OpenShift Data Foundation supports the ability to use the multi-network plug-in Multus on bare metal infrastructures to improve security and performance by isolating the different types of network traffic. By using Multus, one or more network interfaces on hosts can be reserved for the exclusive use of OpenShift Data Foundation. To use Multus, first run the Multus prerequisite validation tool. For instructions to use the tool, see OpenShift Data Foundation - Multus prerequisite validation tool . For more information about Multus networks, see Multiple networks . You can configure your Multus networks to use IPv4 or IPv6 as a technology preview. This works only for Multus networks that are pure IPv4 or pure IPv6. Networks cannot be mixed mode.
Important Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. However, these features are not fully supported under Red Hat Service Level Agreements, may not be functionally complete, and are not intended for production use. As Red Hat considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features. See Technology Preview Features Support Scope for more information. 8.2.1. Multus prerequisites In order for Ceph-CSI to communicate with a Multus-enabled CephCluster, some setup is required for Kubernetes hosts. These prerequisites require an understanding of how Multus networks are configured and how Rook uses them. This section will help clarify questions that could arise. Two basic requirements must be met: OpenShift hosts must be able to route successfully to the Multus public network. Pods on the Multus public network must be able to route successfully to OpenShift hosts. These two requirements can be broken down further as follows: For routing Kubernetes hosts to the Multus public network, each host must ensure the following: The host must have an interface connected to the Multus public network (the "public-network-interface"). The "public-network-interface" must have an IP address. A route must exist to direct traffic destined for pods on the Multus public network through the "public-network-interface". For routing pods on the Multus public network to Kubernetes hosts, the public NetworkAttachmentDefinition must be configured to ensure the following: The definition must have its IP Address Management (IPAM) configured to route traffic destined for nodes through the network. To ensure routing between the two networks works properly, no IP address assigned to a node can overlap with any IP address assigned to a pod on the Multus public network. Generally, both the NetworkAttachmentDefinition and node configurations must use the same network technology (Macvlan) to connect to the Multus public network. Node configurations and pod configurations are interrelated and tightly coupled. Both must be planned at the same time, and OpenShift Data Foundation cannot support Multus public networks without both. The "public-network-interface" must be the same for both. Generally, the connection technology (Macvlan) should also be the same for both. IP range(s) in the NetworkAttachmentDefinition must be encoded as routes on nodes, and, in mirror, IP ranges for nodes must be encoded as routes in the NetworkAttachmentDefinition. Some installations might not want to use the same public network IP address range for both pods and nodes. In the case where there are different ranges for pods and nodes, additional steps must be taken to ensure each range routes to the other so that they act as a single, contiguous network. These requirements require careful planning. See Multus examples to help understand and implement these requirements. Tip There are often ten or more OpenShift Data Foundation pods per storage node. The pod address space usually needs to be several times larger (or more) than the host address space. Using the NMState operator's NodeNetworkConfigurationPolicies is a good method of configuring hosts to meet the host requirements. Other methods can be used as well if needed. 8.2.1.1.
Multus network address space sizing Networks must have enough addresses to account for the number of storage pods that will attach to the network, plus some additional space to account for failover events. It is highly recommended to also plan ahead for future storage cluster expansion and estimate how large the OpenShift Container Platform and OpenShift Data Foundation clusters may grow in the future. Reserving addresses for future expansion means that there is lower risk of depleting the IP address pool unexpectedly during expansion. It is safest to allocate 25% more addresses (or more) than the total maximum number of addresses that are expected to be needed at one time in the storage cluster's lifetime. This helps lower the risk of depleting the IP address pool during failover and maintenance. For ease of writing corresponding network CIDR configurations, rounding totals up to the nearest power of 2 is also recommended. Three ranges must be planned: If used, the public Network Attachment Definition address space must include enough IPs for the total number of ODF pods running in the openshift-storage namespace If used, the cluster Network Attachment Definition address space must include enough IPs for the total number of OSD pods running in the openshift-storage namespace If the Multus public network is used, the node public network address space must include enough IPs for the total number of OpenShift nodes connected to the Multus public network. Note If the cluster uses a unified address space for the public Network Attachment Definition and node public network attachments, add these two requirements together. This is relevant, for example, if DHCP is used to manage IPs for the public network. Important For users with environments with piecewise CIDRs, that is one network with two or more different CIDRs, auto-detection is likely to find only a single CIDR, meaning Ceph daemons may fail to start or fail to connect to the network. See this knowledgebase article for information to mitigate this issue. 8.2.1.1.1. Recommendation The following recommendation suffices for most organizations. The recommendation uses the last 6.25% (1/16) of the reserved private address space (192.168.0.0/16), assuming the beginning of the range is in use or otherwise desirable. Approximate maximums (accounting for 25% overhead) are given. Table 8.1. Multus recommendations Network Network range CIDR Approximate maximums Public Network Attachment Definition 192.168.240.0/21 1,600 total ODF pods Cluster Network Attachment Definition 192.168.248.0/22 800 OSDs Node public network attachments 192.168.252.0/23 400 total nodes 8.2.1.1.2. Calculation More detailed address space sizes can be determined as follows: Determine the maximum number of OSDs that are likely to be needed in the future. Add 25%, then add 5. Round the result up to the nearest power of 2. This is the cluster address space size. Begin with the un-rounded number calculated in step 1. Add 64, then add 25%. Round the result up to the nearest power of 2. This is the public address space size for pods. Determine the maximum number of total OpenShift nodes (including storage nodes) that are likely to be needed in the future. Add 25%. Round the result up to the nearest power of 2. This is the public address space size for nodes. 8.2.1.2. 
Verifying requirements have been met After configuring nodes and creating the Multus public NetworkAttachmentDefinition (see Creating network attachment definitions ), check that the node configurations and NetworkAttachmentDefinition configurations are compatible. To do so, verify that each node can ping pods via the public network. Start a daemonset of test pods that are attached to the Multus public network, so that one test pod runs on each node. List all IPs assigned to the test pods (each test pod will have 2 IPs), and manually extract the IPs associated with the Multus public network from the output. For example, the test pod IPs on the Multus public network might be: 192.168.20.22 192.168.20.29 192.168.20.23 Check that each node can reach all of the test pod IPs over the public network, for example, by pinging each test pod IP from each node. If any node does not get a successful ping to a running pod, it is not safe to proceed. Diagnose and fix the issue, then repeat this testing. Some reasons you may encounter a problem include: The host may not be properly attached to the Multus public network (via Macvlan) The host may not be properly configured to route to the pod IP range The public NetworkAttachmentDefinition may not be properly configured to route back to the host IP range The host may have a firewall rule blocking the connection in either direction The network switch may have a firewall or security rule blocking the connection Suggested debugging steps: Ensure nodes can ping each other over the public network using their "shim" IPs Ensure the output of ip address on each node shows the "shim" interface with its expected IP address 8.2.2. Multus examples The relevant network plan for this cluster is as follows: A dedicated NIC provides eth0 for the Multus public network Macvlan will be used to attach OpenShift pods to eth0 The IP range 192.168.0.0/16 is free in the example cluster - pods and nodes will share this IP range on the Multus public network Nodes will get the IP range 192.168.252.0/22 (this allows up to 1024 Kubernetes hosts, more than the example organization will ever need) Pods will get the remainder of the ranges (192.168.0.1 to 192.168.251.255) The example organization does not want to use DHCP unless necessary; therefore, nodes will have IPs on the Multus network (via eth0) assigned statically using the NMState operator 's NodeNetworkConfigurationPolicy resources With DHCP unavailable, Whereabouts will be used to assign IPs to the Multus public network because it is easy to use out of the box There are 3 compute nodes in the OpenShift cluster on which OpenShift Data Foundation also runs: compute-0, compute-1, and compute-2 Nodes' network policies must be configured to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Generally speaking, the host must connect to the Multus public network using the same technology that pods do. Pod connections are configured in the Network Attachment Definition. Because the host IP range is a subset of the whole range, hosts are not able to route to pods simply by IP assignment. A route must be added to hosts to allow them to route to the whole 192.168.0.0/16 range. NodeNetworkConfigurationPolicy desiredState specs will look like the sketch that follows. For static IP management, each node must have a different NodeNetworkConfigurationPolicy. Select separate nodes for each policy to configure static networks.
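The following is a hedged, minimal sketch of the two resources discussed here and in the notes that follow: a NodeNetworkConfigurationPolicy that creates the node "shim" interface and its route, and the public NetworkAttachmentDefinition that uses Whereabouts. The resource names (odf-pub-shim, public-net), the node selector, and the exact nmstate/CNI keys are illustrative assumptions and should be verified against the NMState and Multus documentation for your versions; the IP ranges follow the example plan above.

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ceph-public-net-shim-compute-0    # one policy per node when IPs are static
spec:
  nodeSelector:
    kubernetes.io/hostname: compute-0
  desiredState:
    interfaces:
      - name: odf-pub-shim                # the host "shim" interface
        type: mac-vlan
        state: up
        mac-vlan:
          base-iface: eth0                # the Multus public network interface
          mode: bridge
          promiscuous: true
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.168.252.1           # node IP from the 192.168.252.0/22 node range
              prefix-length: 22
    routes:
      config:
        - destination: 192.168.0.0/16     # route to pods on the Multus public network
          next-hop-interface: odf-pub-shim
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: public-net
  namespace: openshift-storage
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.0.0/16",
        "exclude": ["192.168.252.0/22"],
        "routes": [{ "dst": "192.168.252.0/22" }]
      }
    }
```

A policy like the one above would be repeated for compute-1 and compute-2, each with its own static node IP.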
A "shim" interface is used to connect hosts to the Multus public network using the same technology as the Network Attachment Definition will use. The host's "shim" must be of the same type as planned for pods, macvlan in this example. The interface must match the Multus public network interface selected in planning, eth0 in this example. The ipv4 (or ipv6` ) section configures node IP addresses on the Multus public network. IPs assigned to this node's shim must match the plan. This example uses 192.168.252.0/22 for node IPs on the Multus public network. For static IP management, don't forget to change the IP for each node. The routes section instructs nodes how to reach pods on the Multus public network. The route destination(s) must match the CIDR range planned for pods. In this case, it is safe to use the entire 192.168.0.0/16 range because it won't affect nodes' ability to reach other nodes over their "shim" interfaces. In general, this must match the CIDR used in the Multus public NetworkAttachmentDefinition. The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' exclude option to simplify the range request. The Whereabouts routes[].dst option ensures pods route to hosts via the Multus public network. This must match the plan for how to attach pods to the Multus public network. Nodes must attach using the same technology, Macvlan. The interface must match the Multus public network interface selected in planning, eth0 in this example. The plan for this example uses whereabouts instead of DHCP for assigning IPs to pods. For this example, it was decided that pods could be assigned any IP in the range 192.168.0.0/16 with the exception of a portion of the range allocated to nodes (see 5). whereabouts provides an exclude directive that allows easily excluding the range allocated for nodes from its pool. This allows keeping the range directive (see 4 ) simple. The routes section instructs pods how to reach nodes on the Multus public network. The route destination ( dst ) must match the CIDR range planned for nodes. 8.2.3. Holder pod deprecation Due to the recurring maintenance impact of holder pods during upgrade (holder pods are present when Multus is enabled), holder pods are deprecated in the ODF v4.18 release and targeted for removal in the ODF v4.18 release. This deprecation requires completing additional network configuration actions before removing the holder pods. In ODF v4.16, clusters with Multus enabled are upgraded to v4.17 following standard upgrade procedures. After the ODF cluster (with Multus enabled) is successfully upgraded to v4.17, administrators must then complete the procedure documented in the article Disabling Multus holder pods to disable and remove holder pods. Be aware that this disabling procedure is time consuming; however, it is not critical to complete the entire process immediately after upgrading to v4.17. It is critical to complete the process before ODF is upgraded to v4.18. 8.2.4. Segregating storage traffic using Multus By default, Red Hat OpenShift Data Foundation is configured to use the Red Hat OpenShift Software Defined Network (SDN). 
The default SDN carries the following types of traffic: Pod-to-pod traffic Pod-to-storage traffic, known as public network traffic when the storage is OpenShift Data Foundation OpenShift Data Foundation internal replication and rebalancing traffic, known as cluster network traffic There are three ways to segregate OpenShift Data Foundation from the OpenShift default network: Reserve a network interface on the host for the public network of OpenShift Data Foundation Pod-to-storage and internal storage replication traffic coexist on a network that is isolated from pod-to-pod network traffic. Application pods have access to the maximum public network storage bandwidth when the OpenShift Data Foundation cluster is healthy. When the OpenShift Data Foundation cluster is recovering from failure, the application pods will have reduced bandwidth due to ongoing replication and rebalancing traffic. Reserve a network interface on the host for OpenShift Data Foundation's cluster network Pod-to-pod and pod-to-storage traffic both continue to use OpenShift's default network. Pod-to-storage bandwidth is less affected by the health of the OpenShift Data Foundation cluster. Pod-to-pod and pod-to-storage OpenShift Data Foundation traffic might contend for network bandwidth in busy OpenShift clusters. The storage internal network often has an overabundance of bandwidth that is unused, reserved for use during failures. Reserve two network interfaces on the host for OpenShift Data Foundation: one for the public network and one for the cluster network Pod-to-pod, pod-to-storage, and storage internal traffic are all isolated, and none of the traffic types will contend for resources. Service level agreements for all traffic types are easier to ensure. During healthy runtime, more network bandwidth is reserved but unused across all three networks. Dual network interface segregated configuration schematic example: Triple network interface fully segregated configuration schematic example: 8.2.5. When to use Multus Use Multus for OpenShift Data Foundation when you need the following: Improved latency - Multus with ODF always improves latency. Use host interfaces at near-host network speeds and bypass OpenShift's software-defined Pod network. You can also perform Linux per-interface level tuning for each interface. Improved bandwidth - Dedicated interfaces for OpenShift Data Foundation client data traffic and internal data traffic. These dedicated interfaces reserve full bandwidth. Improved security - Multus isolates storage network traffic from application network traffic for added security. Bandwidth or performance might not be isolated when networks share an interface; however, you can use QoS or traffic shaping to prioritize bandwidth on shared interfaces. 8.2.6. Multus configuration To use Multus, you must create network attachment definitions (NADs) before deploying the OpenShift Data Foundation cluster, which are later attached to the cluster. For more information, see Creating network attachment definitions . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A Container Network Interface (CNI) configuration inside each of these CRs defines how that interface is created.
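As a hedged illustration of how the NADs are later attached to the cluster, the sketch below shows the network section of the StorageCluster resource referencing a public and a cluster NAD. The NAD names (public-net, cluster-net) and the namespace are assumptions for illustration, and the exact field names should be confirmed against the deployment guide for your version.

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  network:
    provider: multus
    selectors:
      public: openshift-storage/public-net    # NAD for the Multus public network
      cluster: openshift-storage/cluster-net  # NAD for the Multus cluster network
  # ... remaining StorageCluster fields (storageDeviceSets, and so on) omitted
```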
OpenShift Data Foundation supports the macvlan driver, which includes the following features: Each connection gets a sub-interface of the parent interface with its own MAC address and is isolated from the host network. Uses less CPU and provides better throughput than Linux bridge or ipvlan . Bridge mode is almost always the best choice. Near-host performance when the network interface card (NIC) supports virtual ports/virtual local area networks (VLANs) in hardware. OpenShift Data Foundation supports the following two types of IP address management: whereabouts DHCP Uses OpenShift/Kubernetes leases to select unique IP addresses per Pod. Does not require range field. Does not require a DHCP server to provide IPs for Pods. Network DHCP server can give out the same range to Multus Pods as well as any other hosts on the same network. Caution If there is a DHCP server, ensure that the Multus-configured IPAM does not give out the same range so that multiple MAC addresses on the network cannot have the same IP. 8.2.7. Requirements for Multus configuration Prerequisites The interface used for the public network must have the same interface name on each OpenShift storage and worker node, and the interfaces must all be connected to the same underlying network. The interface used for the cluster network must have the same interface name on each OpenShift storage node, and the interfaces must all be connected to the same underlying network. Cluster network interfaces do not have to be present on the OpenShift worker nodes. Each network interface used for the public or cluster network must be capable of at least 10 gigabit network speeds. Each network requires a separate virtual local area network (VLAN) or subnet. See Creating Multus networks for the necessary steps to configure a Multus-based configuration on bare metal. Chapter 9. Disaster Recovery Disaster Recovery (DR) helps an organization recover and resume business-critical functions or normal operations when there are disruptions or disasters. OpenShift Data Foundation provides High Availability (HA) and DR solutions for stateful apps, which are broadly categorized as follows: Metro-DR : Single Region and cross data center protection with no data loss. Regional-DR : Cross Region protection with minimal potential data loss. Disaster Recovery with stretch cluster : Single OpenShift Data Foundation cluster is stretched between two different locations to provide the storage infrastructure with disaster recovery capabilities. 9.1. Metro-DR Metropolitan disaster recovery (Metro-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM), Red Hat Ceph Storage, and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. This release of the Metro-DR solution provides volume persistent data and metadata replication across sites that are geographically dispersed. In the public cloud, this is similar to protecting against an Availability Zone failure. Metro-DR ensures business continuity during the unavailability of a data center with no data loss. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Metropolitan disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article .
Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. Note Hub recovery for Metro-DR is a Technology Preview feature and is subject to Technology Preview support limitations. For detailed solution requirements, see Metro-DR requirements , deployment requirements for Red Hat Ceph Storage stretch cluster with arbiter and RHACM requirements . 9.2. Regional-DR Regional disaster recovery (Regional-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM) and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. It is built on Asynchronous data replication and hence could have a potential data loss but provides the protection against a broad set of failures. Red Hat OpenShift Data Foundation is backed by Ceph as the storage provider, whose lifecycle is managed by Rook and it's enhanced with the ability to: Enable pools for mirroring. Automatically mirror images across RBD pools. Provides csi-addons to manage per Persistent Volume Claim mirroring. This release of Regional-DR supports Multi-Cluster configuration that is deployed across different regions and data centers. For example, a 2-way replication across two managed clusters located in two different regions or data centers. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Regional disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. 
For detailed solution requirements, see Regional-DR requirements and RHACM requirements . 9.3. Disaster Recovery with stretch cluster In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This feature is currently intended for deployment in the OpenShift Container Platform on-premises and in the same location. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. Note The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms maximum round-trip time (RTT) between the zones containing data volumes. For Arbiter nodes follow the latency requirements specified for etcd, see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites(Data Centers/Regions) . Contact Red Hat Customer Support if you are planning to deploy with higher latencies. To use the stretch cluster, You must have a minimum of five nodes across three zones, where: Two nodes per zone are used for each data-center zone, and one additional zone with one node is used for arbiter zone (the arbiter can be on a master node). All the nodes must be manually labeled with the zone labels prior to cluster creation. For example, the zones can be labeled as: topology.kubernetes.io/zone=arbiter (master or worker node) topology.kubernetes.io/zone=datacenter1 (minimum two worker nodes) topology.kubernetes.io/zone=datacenter2 (minimum two worker nodes) For more information, see Configuring OpenShift Data Foundation for stretch cluster . To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Important You can now easily set up disaster recovery with stretch cluster for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see OpenShift Virtualization in OpenShift Container Platform guide. Chapter 10. Disconnected environment Disconnected environment is a network restricted environment where the Operator Lifecycle Manager (OLM) cannot access the default Operator Hub and image registries, which require internet connectivity. Red Hat supports deployment of OpenShift Data Foundation in disconnected environments where you have installed OpenShift Container Platform in restricted networks. To install OpenShift Data Foundation in a disconnected environment, see Using Operator Lifecycle Manager on restricted networks of the Operators guide in OpenShift Container Platform documentation. Note When you install OpenShift Data Foundation in a restricted network environment, apply a custom Network Time Protocol (NTP) configuration to the nodes, because by default, internet connectivity is assumed in OpenShift Container Platform and chronyd is configured to use the *.rhel.pool.ntp.org servers. For more information, see the Red Hat Knowledgebase solution A newly deployed OCS 4 cluster status shows as "Degraded", Why? and Configuring chrony time service of the Installing guide in OpenShift Container Platform documentation. Red Hat OpenShift Data Foundation version 4.12 introduced the Agent-based Installer for disconnected environment deployment. The Agent-based Installer allows you to use a mirror registry for disconnected installations. For more information, see Preparing to install with Agent-based Installer . 
Packages to include for OpenShift Data Foundation When you prune the redhat-operator index image, include the following list of packages for the OpenShift Data Foundation deployment: ocs-operator odf-operator mcg-operator odf-csi-addons-operator odr-cluster-operator odr-hub-operator Optional: local-storage-operator Only for local storage deployments. Optional: odf-multicluster-orchestrator Only for Regional Disaster Recovery (Regional-DR) configuration. Important Name the CatalogSource as redhat-operators . Chapter 11. Supported and Unsupported features for IBM Power and IBM Z Table 11.1. List of supported and unsupported features on IBM Power and IBM Z Features IBM Power IBM Z Compact deployment Unsupported Unsupported Dynamic storage devices Unsupported Supported Stretched Cluster - Arbiter Supported Unsupported Federal Information Processing Standard Publication (FIPS) Unsupported Unsupported Ability to view pool compression metrics Supported Unsupported Automated scaling of Multicloud Object Gateway (MCG) endpoint pods Supported Unsupported Alerts to control overprovision Supported Unsupported Alerts when Ceph Monitor runs out of space Supported Unsupported Extended OpenShift Data Foundation control plane which allows pluggable external storage such as IBM Flashsystem Unsupported Unsupported IPV6 support Unsupported Unsupported Multus Unsupported Unsupported Multicloud Object Gateway (MCG) bucket replication Supported Unsupported Quota support for object data Supported Unsupported Minimum deployment Unsupported Unsupported Regional-Disaster Recovery (Regional-DR) with Red Hat Advanced Cluster Management (RHACM) Supported Unsupported Metro-Disaster Recovery (Metro-DR) multiple clusters with RHACM Supported Supported Single Node solution for Radio Access Network (RAN) Unsupported Unsupported Support for network file system (NFS) services Supported Unsupported Ability to change Multicloud Object Gateway (MCG) account credentials Supported Unsupported Multicluster monitoring in Red Hat Advanced Cluster Management console Supported Unsupported Deletion of expired objects in Multicloud Object Gateway lifecycle Supported Unsupported Agnostic deployment of OpenShift Data Foundation on any Openshift supported platform Unsupported Unsupported Installer provisioned deployment of OpenShift Data Foundation using bare metal infrastructure Unsupported Unsupported Openshift dual stack with OpenShift Data Foundation using IPv4 Unsupported Unsupported Ability to disable Multicloud Object Gateway external service during deployment Unsupported Unsupported Ability to allow overriding of default NooBaa backing store Supported Unsupported Allowing ocs-operator to deploy two MGR pods, one active and one standby Supported Unsupported Disaster Recovery for brownfield deployments Unsupported Supported Automatic scaling of RGW Unsupported Unsupported Chapter 12. steps To start deploying your OpenShift Data Foundation, you can use the internal mode within OpenShift Container Platform or use external mode to make available services from a cluster running outside of OpenShift Container Platform. Depending on your requirement, go to the respective deployment guides. 
Internal mode Deploying OpenShift Data Foundation using Amazon web services Deploying OpenShift Data Foundation using Bare Metal Deploying OpenShift Data Foundation using VMWare vSphere Deploying OpenShift Data Foundation using Microsoft Azure Deploying OpenShift Data Foundation using Google Cloud Deploying OpenShift Data Foundation using Red Hat OpenStack Platform [Technology Preview] Deploying OpenShift Data Foundation on IBM Power Deploying OpenShift Data Foundation on IBM Z Deploying OpenShift Data Foundation on any platform External mode Deploying OpenShift Data Foundation in external mode Internal or external For deploying multiple clusters, see Deploying multiple OpenShift Data Foundation clusters . | [
"apiVersion: apps/v1 kind: DaemonSet metadata: name: multus-public-test namespace: openshift-storage labels: app: multus-public-test spec: selector: matchLabels: app: multus-public-test template: metadata: labels: app: multus-public-test annotations: k8s.v1.cni.cncf.io/networks: openshift-storage/public-net # spec: containers: - name: test image: quay.io/ceph/ceph:v18 # image known to have 'ping' installed command: - sleep - infinity resources: {}",
"oc -n openshift-storage describe pod -l app=multus-public-test | grep -o -E 'Add .* from .*' Add eth0 [10.128.2.86/23] from ovn-kubernetes Add net1 [192.168.20.22/24] from default/public-net Add eth0 [10.129.2.173/23] from ovn-kubernetes Add net1 [192.168.20.29/24] from default/public-net Add eth0 [10.131.0.108/23] from ovn-kubernetes Add net1 [192.168.20.23/24] from default/public-net",
"oc debug node/NODE Starting pod/NODE-debug To use host binaries, run `chroot /host` Pod IP: **** If you don't see a command prompt, try pressing enter. sh-5.1# chroot /host sh-5.1# ping 192.168.20.22 PING 192.168.20.22 (192.168.20.22) 56(84) bytes of data. 64 bytes from 192.168.20.22: icmp_seq=1 ttl=64 time=0.093 ms 64 bytes from 192.168.20.22: icmp_seq=2 ttl=64 time=0.056 ms ^C --- 192.168.20.22 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1046ms rtt min/avg/max/mdev = 0.056/0.074/0.093/0.018 ms sh-5.1# ping 192.168.20.29 PING 192.168.20.29 (192.168.20.29) 56(84) bytes of data. 64 bytes from 192.168.20.29: icmp_seq=1 ttl=64 time=0.403 ms 64 bytes from 192.168.20.29: icmp_seq=2 ttl=64 time=0.181 ms ^C --- 192.168.20.29 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1007ms rtt min/avg/max/mdev = 0.181/0.292/0.403/0.111 ms sh-5.1# ping 192.168.20.23 PING 192.168.20.23 (192.168.20.23) 56(84) bytes of data. 64 bytes from 192.168.20.23: icmp_seq=1 ttl=64 time=0.329 ms 64 bytes from 192.168.20.23: icmp_seq=2 ttl=64 time=0.227 ms ^C --- 192.168.20.23 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1047ms rtt min/avg/max/mdev = 0.227/0.278/0.329/0.051 ms",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-0 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-0 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-0 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-1 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-1 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-1 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-2 # [1] namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-2 # [2] desiredState: Interfaces: [3] - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan # [4] state: up mac-vlan: base-iface: eth0 # [5] mode: bridge promiscuous: true ipv4: # [6] enabled: true dhcp: false address: - ip: 192.168.252.2 # STATIC IP FOR compute-2 # [7] prefix-length: 22 routes: # [8] config: - destination: 192.168.0.0/16 # [9] next-hop-interface: odf-pub-shim",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: public-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", # [1] \"master\": \"eth0\", # [2] \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", # [3] \"range\": \"192.168.0.0/16\", # [4] \"exclude\": [ \"192.168.252.0/22\" # [5] ], \"routes\": [ # [6] {\"dst\": \"192.168.252.0/22\"} # [7] ] } }'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/planning_your_deployment/dynamic_storage_devices |
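After a NetworkAttachmentDefinition such as the public-net example above is applied, it can be verified before the storage cluster is deployed. A brief check, assuming the NAD was created in the openshift-storage namespace:

oc get network-attachment-definitions -n openshift-storage
oc describe network-attachment-definition public-net -n openshift-storage

The describe output shows the CNI configuration exactly as it will be handed to Multus, which makes it easier to spot IPAM range or exclude mistakes before any storage pods consume the network.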
Chapter 4. Advanced configuration | Chapter 4. Advanced configuration 4.1. Advanced configuration This chapter describes how to use Custom Resources (CRs) for advanced configuration of your Red Hat build of Keycloak deployment. 4.1.1. Server configuration details Many server options are exposed as first-class citizen fields in the Keycloak CR. The structure of the CR is based on the configuration structure of Red Hat build of Keycloak. For example, to configure the https-port of the server, follow a similar pattern in the CR and use the httpsPort field. The following example is a complex server configuration; however, it illustrates the relationship between server options and the Keycloak CR: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: db: vendor: postgres usernameSecret: name: usernameSecret key: usernameSecretKey passwordSecret: name: passwordSecret key: passwordSecretKey host: host database: database port: 123 schema: schema poolInitialSize: 1 poolMinSize: 2 poolMaxSize: 3 http: httpEnabled: true httpPort: 8180 httpsPort: 8543 tlsSecret: my-tls-secret hostname: hostname: my-hostname admin: my-admin-hostname strict: false strictBackchannel: false features: enabled: - docker - authorization disabled: - admin - step-up-authentication transaction: xaEnabled: false For a list of options, see the Keycloak CRD. For details on configuring options, see All configuration . 4.1.1.1. Additional options Some expert server options are unavailable as dedicated fields in the Keycloak CR. The following are examples of omitted fields: Fields that require deep understanding of the underlying Red Hat build of Keycloak implementation Fields that are not relevant to an OpenShift environment Fields for provider configuration because they are dynamic based on the used provider implementation The additionalOptions field of the Keycloak CR enables Red Hat build of Keycloak to accept any available configuration in the form of key-value pairs. You can use this field to include any option that is omitted in the Keycloak CR. For details on configuring options, see All configuration . The values can be expressed as plain text strings or Secret object references as shown in this example: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ... additionalOptions: - name: spi-connections-http-client-default-connection-pool-size secret: # Secret reference name: http-client-secret # name of the Secret key: poolSize # name of the Key in the Secret - name: spi-email-template-mycustomprovider-enabled value: true # plain text value Note The name format of options defined in this way is identical to the key format of options specified in the configuration file. For details on various configuration formats, see Configuring Red Hat build of Keycloak . 4.1.2. Secret References Secret References are used by some dedicated options in the Keycloak CR, such as tlsSecret , or as a value in additionalOptions . Similarly ConfigMap References are used by options such as the configMapFile . When specifying a Secret or ConfigMap Reference, make sure that a Secret or ConfigMap containing the referenced keys is present in the same namespace as the CR referencing it. The operator will poll approximately every minute for changes to referenced Secrets or ConfigMaps. When a meaningful change is detected, the Operator performs a rolling restart of the Red Hat build of Keycloak Deployment to pick up the changes. 4.1.3. 
Unsupported features The unsupported field of the CR contains highly experimental configuration options that are not completely tested and are Tech Preview. 4.1.3.1. Pod Template The Pod Template is a raw API representation that is used for the Deployment Template. This field is a temporary workaround in case no supported field exists at the top level of the CR for your use case. The Operator merges the fields of the provided template with the values generated by the Operator for the specific Deployment. With this feature, you have access to a high level of customizations. However, no guarantee exists that the Deployment will work as expected. The following example illustrates injecting labels, annotations, volumes, and volume mounts: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ... unsupported: podTemplate: metadata: labels: my-label: "keycloak" spec: containers: - volumeMounts: - name: test-volume mountPath: /mnt/test volumes: - name: test-volume secret: secretName: keycloak-additional-secret 4.1.4. Disabling required options Red Hat build of Keycloak and the Red Hat build of Keycloak Operator provide the best production-ready experience with security in mind. However, during the development phase, you can disable key security features. Specifically, you can disable the hostname and TLS as shown in the following example: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ... http: httpEnabled: true hostname: strict: false strictBackchannel: false 4.1.5. Resource requirements The Keycloak CR allows specifying the resources options for managing compute resources for the Red Hat build of Keycloak container. It provides the ability to request and limit resources independently for the main Keycloak deployment via the Keycloak CR, and for the realm import Job via the Realm Import CR. When no values are specified, the default requests memory is set to 1700MiB , and the limits memory is set to 2GiB . These values were chosen based on a deeper analysis of Red Hat build of Keycloak memory management. If no values are specified in the Realm Import CR, it falls back to the values specified in the Keycloak CR, or to the defaults as defined above. You can specify your custom values based on your requirements as follows: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ... resources: requests: cpu: 1200m memory: 896Mi limits: cpu: 6 memory: 3Gi Moreover, the Red Hat build of Keycloak container manages the heap size more effectively by providing relative values for the heap size. It is achieved by providing certain JVM options. For more details, see Running Red Hat build of Keycloak in a container . 4.1.6. Truststores If you need to provide trusted certificates, the Keycloak CR provides a top level feature for configuring the server's truststore as discussed in Configuring trusted certificates . Use the truststores stanza of the Keycloak spec to specify Secrets containing PEM encoded files, or PKCS12 files with extension .p12 or .pfx , for example: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ... truststores: my-truststore: secret: name: my-secret Where the contents of my-secret could be a PEM file, for example: apiVersion: v1 kind: Secret metadata: name: my-secret stringData: cert.pem: | -----BEGIN CERTIFICATE----- ... When running on a Kubernetes or OpenShift environment well-known locations of trusted certificates are included automatically. 
This includes /var/run/secrets/kubernetes.io/serviceaccount/ca.crt and the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt when present. | [
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: db: vendor: postgres usernameSecret: name: usernameSecret key: usernameSecretKey passwordSecret: name: passwordSecret key: passwordSecretKey host: host database: database port: 123 schema: schema poolInitialSize: 1 poolMinSize: 2 poolMaxSize: 3 http: httpEnabled: true httpPort: 8180 httpsPort: 8543 tlsSecret: my-tls-secret hostname: hostname: my-hostname admin: my-admin-hostname strict: false strictBackchannel: false features: enabled: - docker - authorization disabled: - admin - step-up-authentication transaction: xaEnabled: false",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: additionalOptions: - name: spi-connections-http-client-default-connection-pool-size secret: # Secret reference name: http-client-secret # name of the Secret key: poolSize # name of the Key in the Secret - name: spi-email-template-mycustomprovider-enabled value: true # plain text value",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: unsupported: podTemplate: metadata: labels: my-label: \"keycloak\" spec: containers: - volumeMounts: - name: test-volume mountPath: /mnt/test volumes: - name: test-volume secret: secretName: keycloak-additional-secret",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: http: httpEnabled: true hostname: strict: false strictBackchannel: false",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: resources: requests: cpu: 1200m memory: 896Mi limits: cpu: 6 memory: 3Gi",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: truststores: my-truststore: secret: name: my-secret",
"apiVersion: v1 kind: Secret metadata: name: my-secret stringData: cert.pem: | -----BEGIN CERTIFICATE-----"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/operator_guide/advanced-configuration- |
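The truststore Secret referenced above (my-secret) can be created directly from a PEM bundle. A minimal sketch, assuming the Keycloak CR is deployed in a namespace named keycloak and the CA bundle is stored locally as ca.pem:

oc create secret generic my-secret \
  --from-file=cert.pem=ca.pem \
  -n keycloak

Because the Operator polls referenced Secrets approximately every minute, replacing the Secret contents later triggers the same rolling restart described in the Secret References section.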
Chapter 4. Red Hat Enterprise Linux 6 | Chapter 4. Red Hat Enterprise Linux 6 This section outlines the packages released for Red Hat Enterprise Linux 6. 4.1. Red Hat Satellite Client 6 (for RHEL 6 Server - ELS) (RPMs) The following table outlines the packages included in the rhel-6-server-els-satellite-client-6-rpms repository. Table 4.1. Red Hat Satellite Client 6 (for RHEL 6 Server - ELS) (RPMs) Name Version Advisory gofer 2.11.9-1.el6sat RHBA-2022:96562 katello-agent 3.5.7-3.el6sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el6sat RHBA-2022:96562 katello-host-tools-fact-plugin 3.5.7-3.el6sat RHBA-2022:96562 pulp-rpm-handlers 2.21.5-2.el6sat RHBA-2022:96562 puppet-agent 7.16.0-2.el6sat RHBA-2022:96562 python-gofer 2.11.9-1.el6sat RHBA-2022:96562 python-gofer-proton 2.11.9-1.el6sat RHBA-2022:96562 python-isodate 0.5.0-4.el6sat RHBA-2022:96562 python-pulp-agent-lib 2.21.5-2.el6sat RHBA-2022:96562 python-pulp-common 2.21.5-2.el6sat RHBA-2022:96562 python-pulp-rpm-common 2.21.5-2.el6sat RHBA-2022:96562 python-qpid-proton 0.28.0-3.el6_10 RHBA-2022:96562 qpid-proton-c 0.28.0-3.el6_10 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el6sat RHBA-2022:96562 rubygem-json 1.4.6-2.el6 RHBA-2022:96562 4.2. Red Hat Satellite Client 6 (for RHEL 6 for System Z - ELS) (RPMs) The following table outlines the packages included in the rhel-6-for-system-z-els-satellite-client-6-rpms repository. Table 4.2. Red Hat Satellite Client 6 (for RHEL 6 for System Z - ELS) (RPMs) Name Version Advisory gofer 2.11.9-1.el6sat RHBA-2022:96562 katello-agent 3.5.7-3.el6sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el6sat RHBA-2022:96562 katello-host-tools-fact-plugin 3.5.7-3.el6sat RHBA-2022:96562 pulp-rpm-handlers 2.21.5-2.el6sat RHBA-2022:96562 python-gofer 2.11.9-1.el6sat RHBA-2022:96562 python-gofer-proton 2.11.9-1.el6sat RHBA-2022:96562 python-isodate 0.5.0-4.el6sat RHBA-2022:96562 python-pulp-agent-lib 2.21.5-2.el6sat RHBA-2022:96562 python-pulp-common 2.21.5-2.el6sat RHBA-2022:96562 python-pulp-rpm-common 2.21.5-2.el6sat RHBA-2022:96562 python-qpid-proton 0.28.0-3.el6_10 RHBA-2022:96562 qpid-proton-c 0.28.0-3.el6_10 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el6sat RHBA-2022:96562 rubygem-json 1.4.6-2.el6 RHBA-2022:96562 | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/package_manifest/sat-6-15-rhel6 |
Chapter 6. ConsolePlugin [console.openshift.io/v1] | Chapter 6. ConsolePlugin [console.openshift.io/v1] Description ConsolePlugin is an extension for customizing OpenShift web console by dynamically loading code from another service running on the cluster. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsolePluginSpec is the desired plugin configuration. 6.1.1. .spec Description ConsolePluginSpec is the desired plugin configuration. Type object Required backend displayName Property Type Description backend object backend holds the configuration of backend which is serving console's plugin . displayName string displayName is the display name of the plugin. The dispalyName should be between 1 and 128 characters. i18n object i18n is the configuration of plugin's localization resources. proxy array proxy is a list of proxies that describe various service type to which the plugin needs to connect to. proxy[] object ConsolePluginProxy holds information on various service types to which console's backend will proxy the plugin's requests. 6.1.2. .spec.backend Description backend holds the configuration of backend which is serving console's plugin . Type object Required type Property Type Description service object service is a Kubernetes Service that exposes the plugin using a deployment with an HTTP server. The Service must use HTTPS and Service serving certificate. The console backend will proxy the plugins assets from the Service using the service CA bundle. type string type is the backend type which servers the console's plugin. Currently only "Service" is supported. --- 6.1.3. .spec.backend.service Description service is a Kubernetes Service that exposes the plugin using a deployment with an HTTP server. The Service must use HTTPS and Service serving certificate. The console backend will proxy the plugins assets from the Service using the service CA bundle. Type object Required name namespace port Property Type Description basePath string basePath is the path to the plugin's assets. The primary asset it the manifest file called plugin-manifest.json , which is a JSON document that contains metadata about the plugin and the extensions. name string name of Service that is serving the plugin assets. namespace string namespace of Service that is serving the plugin assets. port integer port on which the Service that is serving the plugin is listening to. 6.1.4. .spec.i18n Description i18n is the configuration of plugin's localization resources. 
Type object Required loadType Property Type Description loadType string loadType indicates how the plugin's localization resource should be loaded. Valid values are Preload, Lazy and the empty string. When set to Preload, all localization resources are fetched when the plugin is loaded. When set to Lazy, localization resources are lazily loaded as and when they are required by the console. When omitted or set to the empty string, the behaviour is equivalent to Lazy type. 6.1.5. .spec.proxy Description proxy is a list of proxies that describe various service type to which the plugin needs to connect to. Type array 6.1.6. .spec.proxy[] Description ConsolePluginProxy holds information on various service types to which console's backend will proxy the plugin's requests. Type object Required alias endpoint Property Type Description alias string alias is a proxy name that identifies the plugin's proxy. An alias name should be unique per plugin. The console backend exposes following proxy endpoint: /api/proxy/plugin/<plugin-name>/<proxy-alias>/<request-path>?<optional-query-parameters> Request example path: /api/proxy/plugin/acm/search/pods?namespace=openshift-apiserver authorization string authorization provides information about authorization type, which the proxied request should contain caCertificate string caCertificate provides the cert authority certificate contents, in case the proxied Service is using custom service CA. By default, the service CA bundle provided by the service-ca operator is used. endpoint object endpoint provides information about endpoint to which the request is proxied to. 6.1.7. .spec.proxy[].endpoint Description endpoint provides information about endpoint to which the request is proxied to. Type object Required type Property Type Description service object service is an in-cluster Service that the plugin will connect to. The Service must use HTTPS. The console backend exposes an endpoint in order to proxy communication between the plugin and the Service. Note: service field is required for now, since currently only "Service" type is supported. type string type is the type of the console plugin's proxy. Currently only "Service" is supported. --- 6.1.8. .spec.proxy[].endpoint.service Description service is an in-cluster Service that the plugin will connect to. The Service must use HTTPS. The console backend exposes an endpoint in order to proxy communication between the plugin and the Service. Note: service field is required for now, since currently only "Service" type is supported. Type object Required name namespace port Property Type Description name string name of Service that the plugin needs to connect to. namespace string namespace of Service that the plugin needs to connect to port integer port on which the Service that the plugin needs to connect to is listening on. 6.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleplugins DELETE : delete collection of ConsolePlugin GET : list objects of kind ConsolePlugin POST : create a ConsolePlugin /apis/console.openshift.io/v1/consoleplugins/{name} DELETE : delete a ConsolePlugin GET : read the specified ConsolePlugin PATCH : partially update the specified ConsolePlugin PUT : replace the specified ConsolePlugin 6.2.1. /apis/console.openshift.io/v1/consoleplugins Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConsolePlugin Table 6.2. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsolePlugin Table 6.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.5. HTTP responses HTTP code Reponse body 200 - OK ConsolePluginList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsolePlugin Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body ConsolePlugin schema Table 6.8. 
HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 201 - Created ConsolePlugin schema 202 - Accepted ConsolePlugin schema 401 - Unauthorized Empty 6.2.2. /apis/console.openshift.io/v1/consoleplugins/{name} Table 6.9. Global path parameters Parameter Type Description name string name of the ConsolePlugin Table 6.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsolePlugin Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.12. Body parameters Parameter Type Description body DeleteOptions schema Table 6.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsolePlugin Table 6.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.15. HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsolePlugin Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body Patch schema Table 6.18. HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsolePlugin Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body ConsolePlugin schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 201 - Created ConsolePlugin schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/console_apis/consoleplugin-console-openshift-io-v1 |
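Bringing the required fields of the ConsolePlugin spec together, a minimal manifest might look like the following; the plugin name, Service name, namespace, and port are illustrative assumptions only:

apiVersion: console.openshift.io/v1
kind: ConsolePlugin
metadata:
  name: my-plugin
spec:
  displayName: My Plugin                 # must be between 1 and 128 characters
  backend:
    type: Service                        # currently the only supported backend type
    service:
      name: my-plugin-service            # Service serving the plugin assets over HTTPS
      namespace: my-plugin-namespace
      port: 9443
      basePath: /                        # location of plugin-manifest.json
  i18n:
    loadType: Lazy                       # Preload, Lazy, or empty (equivalent to Lazy)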
25.5. Configuring a Fibre Channel over Ethernet Interface | 25.5. Configuring a Fibre Channel over Ethernet Interface Setting up and deploying a Fibre Channel over Ethernet (FCoE) interface requires two packages: fcoe-utils lldpad Once these packages are installed, perform the following procedure to enable FCoE over a virtual LAN (VLAN): Procedure 25.10. Configuring an Ethernet Interface to Use FCoE To configure a new VLAN, make a copy of an existing network script, for example /etc/fcoe/cfg-eth0 , and change the name to the Ethernet device that supports FCoE. This provides you with a default file to configure. Given that the FCoE device is eth X , run: Modify the contents of cfg-eth X as needed. Notably, set DCB_REQUIRED to no for networking interfaces that implement a hardware Data Center Bridging Exchange (DCBX) protocol client. If you want the device to automatically load during boot time, set ONBOOT=yes in the corresponding /etc/sysconfig/network-scripts/ifcfg-eth X file. For example, if the FCoE device is eth2, edit /etc/sysconfig/network-scripts/ifcfg-eth2 accordingly. Start the data center bridging daemon ( dcbd ) by running: For networking interfaces that implement a hardware DCBX client, skip this step. For interfaces that require a software DCBX client, enable data center bridging on the Ethernet interface by running: Then, enable FCoE on the Ethernet interface by running: Note that these commands only work if the dcbd settings for the Ethernet interface were not changed. Load the FCoE device now using: Start FCoE using: The FCoE device appears soon if all other settings on the fabric are correct. To view configured FCoE devices, run: After correctly configuring the Ethernet interface to use FCoE, Red Hat recommends that you set FCoE and the lldpad service to run at startup. To do so, use the systemctl utility: Note Running the # systemctl stop fcoe command stops the daemon, but does not reset the configuration of FCoE interfaces. To do so, run the # systemctl -s SIGHUP kill fcoe command. As of Red Hat Enterprise Linux 7, Network Manager has the ability to query and set the DCB settings of a DCB capable Ethernet interface. | [
"cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth X",
"systemctl start lldpad",
"dcbtool sc eth X dcb on",
"dcbtool sc eth X app:fcoe e:1",
"ip link set dev eth X up",
"systemctl start fcoe",
"fcoeadm -i",
"systemctl enable lldpad",
"systemctl enable fcoe"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/fcoe-config |
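For the two files edited in the procedure above, the following sketch shows the kind of content involved; the variable names are based on the defaults shipped with fcoe-utils and the values are illustrative, so compare them against the template file on your system:

# /etc/fcoe/cfg-ethX
FCOE_ENABLE="yes"
DCB_REQUIRED="no"          # "no" when the NIC implements a hardware DCBX client

# /etc/sysconfig/network-scripts/ifcfg-ethX
DEVICE=ethX
BOOTPROTO=none
ONBOOT=yes                 # bring the interface, and with it FCoE, up at boot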
Preface | Preface Standalone Manager installation is manual and customizable. You must install a Red Hat Enterprise Linux machine, then run the configuration script ( engine-setup ) and provide information about how you want to configure the Red Hat Virtualization Manager. Add hosts and storage after the Manager is running. At least two hosts are required for virtual machine high availability. To install the Manager with a remote Manager database, manually create the database on the remote machine before running engine-setup . To install the Data Warehouse database on a remote machine, run the Data Warehouse configuration script ( ovirt-engine-dwh-setup ) on the remote machine. This script installs the Data Warehouse service and can create the Data Warehouse database automatically. See the Planning and Prerequisites Guide for information on environment options and recommended configuration. Red Hat Virtualization Key Components Component Name Description Red Hat Virtualization Manager A service that provides a graphical user interface and a REST API to manage the resources in the environment. The Manager is installed on a physical or virtual machine running Red Hat Enterprise Linux. Hosts Red Hat Enterprise Linux hosts (RHEL hosts) and Red Hat Virtualization Hosts (image-based hypervisors) are the two supported types of host. Hosts use Kernel-based Virtual Machine (KVM) technology and provide resources used to run virtual machines. Shared Storage A storage service is used to store the data associated with virtual machines. Data Warehouse A service that collects configuration information and statistical data from the Manager. Standalone Manager Architecture The Red Hat Virtualization Manager runs on a physical server, or a virtual machine hosted in a separate virtualization environment. A standalone Manager is easier to deploy and manage, but requires an additional physical server. The Manager is only highly available when managed externally with a product such as Red Hat's High Availability Add-On. The minimum setup for a standalone Manager environment includes: One Red Hat Virtualization Manager machine. The Manager is typically deployed on a physical server. However, it can also be deployed on a virtual machine, as long as that virtual machine is hosted in a separate environment. The Manager must run on Red Hat Enterprise Linux 8. A minimum of two hosts for virtual machine high availability. You can use Red Hat Enterprise Linux hosts or Red Hat Virtualization Hosts (RHVH). VDSM (the host agent) runs on all hosts to facilitate communication with the Red Hat Virtualization Manager. One storage service, which can be hosted locally or on a remote server, depending on the storage type used. The storage service must be accessible to all hosts. Figure 1. Standalone Manager Red Hat Virtualization Architecture | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/pr01 |
Chapter 6. Investigating the operators of the RHOSO High Availability services | Chapter 6. Investigating the operators of the RHOSO High Availability services Use the following command to list the operators in your Red Hat OpenStack Services on OpenShift (RHOSO) environment and obtain the full names of the operators of the RHOSO High Availability services: Note When using these operators, you can abbreviate the operator name by excluding the .openstack-operators portion of the name. For example, you can use the infra-operator to manage the memcached service. You can use the full name of an operator to retrieve its custom resource definition (CRD). First, use the following command to obtain the name of the CRD of the required operator: Replace <operator-name> with the full name of the required operator. This example obtains the name of the CRD of the Galera operator: Then you can use the following command to describe the CRD of this operator: Replace <operator-crd-name> with the full name of the CRD of the required operator, obtained from the previous command. This example describes the CRD of the Galera operator: For more information about Red Hat OpenShift Container Platform (RHOCP) operators, see What are Operators? | [
"oc get operators NAME AGE infra-operator.openstack-operators 9h mariadb-operator.openstack-operators 9h rabbitmq-cluster-operator.openstack-operators 9h",
"oc describe operator/<operator-name>",
"oc describe operator/mariadb-operator.openstack-operators |less Status: Components: Kind: CustomResourceDefinition Name: galeras.mariadb.openstack.org",
"oc describe crd/<operator-crd-name>",
"oc describe crd/galeras.mariadb.openstack.org Name: galeras.mariadb.openstack.org Namespace: Labels: operators.coreos.com/mariadb-operator.openstack-operators= Annotations: controller-gen.kubebuilder.io/version: v0.11.1 operatorframework.io/installed-alongside-96a31840a95472ca: openstack-operators/mariadb-operator.v0.0.1 API Version: apiextensions.k8s.io/v1 Kind: CustomResourceDefinition Metadata: Creation Timestamp: 2024-03-21T22:08:06Z Generation: 1 Resource Version: 64637 UID: f68caee7-b4ec-4713-8095-c4ee9b1fd13e Spec: ."
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/monitoring_high_availability_services/proc_investigating-the-rhoso-high-availability-operators_ha-monitoring |
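The two lookups described in this chapter can be chained. The following sketch reuses only the oc commands shown above plus standard grep/awk filtering, and it assumes the describe output keeps the Status / Components layout shown in the example.

    # List the installed operators, then pull the CRD names owned by the
    # mariadb (Galera) operator out of its Status -> Components section.
    oc get operators
    oc describe operator/mariadb-operator.openstack-operators \
      | grep -A1 'Kind:[[:space:]]*CustomResourceDefinition' \
      | awk '/Name:/ {print $2}'

    # Describe one of the returned CRDs, for example the Galera CRD.
    oc describe crd/galeras.mariadb.openstack.org | less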
Release notes for the Red Hat build of Cryostat 3.0 | Release notes for the Red Hat build of Cryostat 3.0 Red Hat build of Cryostat 3 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/release_notes_for_the_red_hat_build_of_cryostat_3.0/index |
Chapter 9. ConsoleYAMLSample [console.openshift.io/v1] | Chapter 9. ConsoleYAMLSample [console.openshift.io/v1] Description ConsoleYAMLSample is an extension for customizing OpenShift web console YAML samples. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required metadata spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleYAMLSampleSpec is the desired YAML sample configuration. Samples will appear with their descriptions in a samples sidebar when creating a resource in the web console. 9.1.1. .spec Description ConsoleYAMLSampleSpec is the desired YAML sample configuration. Samples will appear with their descriptions in a samples sidebar when creating a resource in the web console. Type object Required description targetResource title yaml Property Type Description description string description of the YAML sample. snippet boolean snippet indicates that the YAML sample is not the full YAML resource definition, but a fragment that can be inserted into the existing YAML document at the user's cursor. targetResource object targetResource contains apiVersion and kind of the resource the YAML sample is representing. title string title of the YAML sample. yaml string yaml is the YAML sample to display. 9.1.2. .spec.targetResource Description targetResource contains apiVersion and kind of the resource the YAML sample is representing. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 9.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleyamlsamples DELETE : delete collection of ConsoleYAMLSample GET : list objects of kind ConsoleYAMLSample POST : create a ConsoleYAMLSample /apis/console.openshift.io/v1/consoleyamlsamples/{name} DELETE : delete a ConsoleYAMLSample GET : read the specified ConsoleYAMLSample PATCH : partially update the specified ConsoleYAMLSample PUT : replace the specified ConsoleYAMLSample 9.2.1. /apis/console.openshift.io/v1/consoleyamlsamples HTTP method DELETE Description delete collection of ConsoleYAMLSample Table 9.1.
HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleYAMLSample Table 9.2. HTTP responses HTTP code Response body 200 - OK ConsoleYAMLSampleList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleYAMLSample Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.4. Body parameters Parameter Type Description body ConsoleYAMLSample schema Table 9.5. HTTP responses HTTP code Response body 200 - OK ConsoleYAMLSample schema 201 - Created ConsoleYAMLSample schema 202 - Accepted ConsoleYAMLSample schema 401 - Unauthorized Empty 9.2.2. /apis/console.openshift.io/v1/consoleyamlsamples/{name} Table 9.6. Global path parameters Parameter Type Description name string name of the ConsoleYAMLSample HTTP method DELETE Description delete a ConsoleYAMLSample Table 9.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleYAMLSample Table 9.9. HTTP responses HTTP code Response body 200 - OK ConsoleYAMLSample schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleYAMLSample Table 9.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.11. HTTP responses HTTP code Response body 200 - OK ConsoleYAMLSample schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleYAMLSample Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body ConsoleYAMLSample schema Table 9.14. HTTP responses HTTP code Response body 200 - OK ConsoleYAMLSample schema 201 - Created ConsoleYAMLSample schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/console_apis/consoleyamlsample-console-openshift-io-v1
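To make the spec fields above concrete, here is a minimal, hypothetical ConsoleYAMLSample manifest that registers a ConfigMap sample in the web console; the metadata name, title, description, and embedded YAML are illustrative values rather than content from the product documentation.

    cat <<'EOF' | oc apply -f -
    apiVersion: console.openshift.io/v1
    kind: ConsoleYAMLSample
    metadata:
      name: example-configmap-sample
    spec:
      targetResource:
        apiVersion: v1
        kind: ConfigMap
      title: Example ConfigMap
      description: A minimal ConfigMap with a single key, shown in the samples sidebar.
      snippet: false
      yaml: |
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: example-config
        data:
          key: value
    EOF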
Chapter 15. Tutorial: Assigning a consistent egress IP for external traffic | Chapter 15. Tutorial: Assigning a consistent egress IP for external traffic You can assign a consistent IP address for traffic that leaves your cluster such as security groups which require an IP-based configuration to meet security standards. By default, Red Hat OpenShift Service on AWS (ROSA) uses the OVN-Kubernetes container network interface (CNI) to assign random IP addresses from a pool. This can make configuring security lockdowns unpredictable or open. See Configuring an egress IP address for more information. Objectives Learn how to configure a set of predictable IP addresses for egress cluster traffic. Prerequisites A ROSA cluster deployed with OVN-Kubernetes The OpenShift CLI ( oc ) The ROSA CLI ( rosa ) jq 15.1. Setting your environment variables Set your environment variables by running the following command: Note Replace the value of the ROSA_MACHINE_POOL_NAME variable to target a different machine pool. USD export ROSA_CLUSTER_NAME=USD(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}USD//') USD export ROSA_MACHINE_POOL_NAME=worker 15.2. Ensuring capacity The number of IP addresses assigned to each node is limited for each public cloud provider. Verify sufficient capacity by running the following command: USD oc get node -o json | \ jq '.items[] | { "name": .metadata.name, "ips": (.status.addresses | map(select(.type == "InternalIP") | .address)), "capacity": (.metadata.annotations."cloud.network.openshift.io/egress-ipconfig" | fromjson[] | .capacity.ipv4) }' Example output --- { "name": "ip-10-10-145-88.ec2.internal", "ips": [ "10.10.145.88" ], "capacity": 14 } { "name": "ip-10-10-154-175.ec2.internal", "ips": [ "10.10.154.175" ], "capacity": 14 } --- 15.3. Creating the egress IP rules Before creating the egress IP rules, identify which egress IPs you will use. Note The egress IPs that you select should exist as a part of the subnets in which the worker nodes are provisioned. Optional : Reserve the egress IPs that you requested to avoid conflicts with the AWS Virtual Private Cloud (VPC) Dynamic Host Configuration Protocol (DHCP) service. Request explicit IP reservations on the AWS documentation for CIDR reservations page. 15.4. Assigning an egress IP to a namespace Create a new project by running the following command: USD oc new-project demo-egress-ns Create the egress rule for all pods within the namespace by running the following command: USD cat <<EOF | oc apply -f - apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: demo-egress-ns spec: # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes # are deployed. egressIPs: - 10.10.100.253 - 10.10.150.253 - 10.10.200.253 namespaceSelector: matchLabels: kubernetes.io/metadata.name: demo-egress-ns EOF 15.5. Assigning an egress IP to a pod Create a new project by running the following command: USD oc new-project demo-egress-pod Create the egress rule for the pod by running the following command: Note spec.namespaceSelector is a mandatory field. USD cat <<EOF | oc apply -f - apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: demo-egress-pod spec: # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes # are deployed. egressIPs: - 10.10.100.254 - 10.10.150.254 - 10.10.200.254 namespaceSelector: matchLabels: kubernetes.io/metadata.name: demo-egress-pod podSelector: matchLabels: run: demo-egress-pod EOF 15.5.1. 
Labeling the nodes Obtain your pending egress IP assignments by running the following command: USD oc get egressips Example output NAME EGRESSIPS ASSIGNED NODE ASSIGNED EGRESSIPS demo-egress-ns 10.10.100.253 demo-egress-pod 10.10.100.254 The egress IP rule that you created only applies to nodes with the k8s.ovn.org/egress-assignable label. Make sure that the label is only on a specific machine pool. Assign the label to your machine pool using the following command: Warning If you rely on node labels for your machine pool, this command will replace those labels. Be sure to input your desired labels into the --labels field to ensure your node labels remain. USD rosa update machinepool USD{ROSA_MACHINE_POOL_NAME} \ --cluster="USD{ROSA_CLUSTER_NAME}" \ --labels "k8s.ovn.org/egress-assignable=" 15.5.2. Reviewing the egress IPs Review the egress IP assignments by running the following command: USD oc get egressips Example output NAME EGRESSIPS ASSIGNED NODE ASSIGNED EGRESSIPS demo-egress-ns 10.10.100.253 ip-10-10-156-122.ec2.internal 10.10.150.253 demo-egress-pod 10.10.100.254 ip-10-10-156-122.ec2.internal 10.10.150.254 15.6. Verification 15.6.1. Deploying a sample application To test the egress IP rule, create a service that is restricted to the egress IP addresses which we have specified. This simulates an external service that is expecting a small subset of IP addresses. Run the echoserver command to replicate a request: USD oc -n default run demo-service --image=gcr.io/google_containers/echoserver:1.4 Expose the pod as a service and limit the ingress to the egress IP addresses you specified by running the following command: USD cat <<EOF | oc apply -f - apiVersion: v1 kind: Service metadata: name: demo-service namespace: default annotations: service.beta.kubernetes.io/aws-load-balancer-scheme: "internal" service.beta.kubernetes.io/aws-load-balancer-internal: "true" spec: selector: run: demo-service ports: - port: 80 targetPort: 8080 type: LoadBalancer externalTrafficPolicy: Local # NOTE: this limits the source IPs that are allowed to connect to our service. It # is being used as part of this demo, restricting connectivity to our egress # IP addresses only. # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes # are deployed. loadBalancerSourceRanges: - 10.10.100.254/32 - 10.10.150.254/32 - 10.10.200.254/32 - 10.10.100.253/32 - 10.10.150.253/32 - 10.10.200.253/32 EOF Retrieve the load balancer hostname and save it as an environment variable by running the following command: USD export LOAD_BALANCER_HOSTNAME=USD(oc get svc -n default demo-service -o json | jq -r '.status.loadBalancer.ingress[].hostname') 15.6.2. Testing the namespace egress Start an interactive shell to test the namespace egress rule: USD oc run \ demo-egress-ns \ -it \ --namespace=demo-egress-ns \ --env=LOAD_BALANCER_HOSTNAME=USDLOAD_BALANCER_HOSTNAME \ --image=registry.access.redhat.com/ubi9/ubi -- \ bash Send a request to the load balancer and ensure that you can successfully connect: USD curl -s http://USDLOAD_BALANCER_HOSTNAME Check the output for a successful connection: Note The client_address is the internal IP address of the load balancer not your egress IP. You can verify that you have configured the client address correctly by connecting with your service limited to .spec.loadBalancerSourceRanges . 
Example output CLIENT VALUES: client_address=10.10.207.247 command=GET real path=/ query=nil request_version=1.1 request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/ SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com user-agent=curl/7.76.1 BODY: -no body in request- Exit the pod by running the following command: USD exit 15.6.3. Testing the pod egress Start an interactive shell to test the pod egress rule: USD oc run \ demo-egress-pod \ -it \ --namespace=demo-egress-pod \ --env=LOAD_BALANCER_HOSTNAME=USDLOAD_BALANCER_HOSTNAME \ --image=registry.access.redhat.com/ubi9/ubi -- \ bash Send a request to the load balancer by running the following command: USD curl -s http://USDLOAD_BALANCER_HOSTNAME Check the output for a successful connection: Note The client_address is the internal IP address of the load balancer not your egress IP. You can verify that you have configured the client address correctly by connecting with your service limited to .spec.loadBalancerSourceRanges . Example output CLIENT VALUES: client_address=10.10.207.247 command=GET real path=/ query=nil request_version=1.1 request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/ SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com user-agent=curl/7.76.1 BODY: -no body in request- Exit the pod by running the following command: USD exit 15.6.4. Optional: Testing blocked egress Optional: Test that the traffic is successfully blocked when the egress rules do not apply by running the following command: USD oc run \ demo-egress-pod-fail \ -it \ --namespace=demo-egress-pod \ --env=LOAD_BALANCER_HOSTNAME=USDLOAD_BALANCER_HOSTNAME \ --image=registry.access.redhat.com/ubi9/ubi -- \ bash Send a request to the load balancer by running the following command: USD curl -s http://USDLOAD_BALANCER_HOSTNAME If the command is unsuccessful, egress is successfully blocked. Exit the pod by running the following command: USD exit 15.7. Cleaning up your cluster Clean up your cluster by running the following commands: USD oc delete svc demo-service -n default; \ USD oc delete pod demo-service -n default; \ USD oc delete project demo-egress-ns; \ USD oc delete project demo-egress-pod; \ USD oc delete egressip demo-egress-ns; \ USD oc delete egressip demo-egress-pod Clean up the assigned node labels by running the following command: Warning If you rely on node labels for your machine pool, this command replaces those labels. Input your desired labels into the --labels field to ensure your node labels remain. USD rosa update machinepool USD{ROSA_MACHINE_POOL_NAME} \ --cluster="USD{ROSA_CLUSTER_NAME}" \ --labels "" | [
"export ROSA_CLUSTER_NAME=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\" | sed 's/-[a-z0-9]\\{5\\}USD//') export ROSA_MACHINE_POOL_NAME=worker",
"oc get node -o json | jq '.items[] | { \"name\": .metadata.name, \"ips\": (.status.addresses | map(select(.type == \"InternalIP\") | .address)), \"capacity\": (.metadata.annotations.\"cloud.network.openshift.io/egress-ipconfig\" | fromjson[] | .capacity.ipv4) }'",
"--- { \"name\": \"ip-10-10-145-88.ec2.internal\", \"ips\": [ \"10.10.145.88\" ], \"capacity\": 14 } { \"name\": \"ip-10-10-154-175.ec2.internal\", \"ips\": [ \"10.10.154.175\" ], \"capacity\": 14 } ---",
"oc new-project demo-egress-ns",
"cat <<EOF | oc apply -f - apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: demo-egress-ns spec: # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes # are deployed. egressIPs: - 10.10.100.253 - 10.10.150.253 - 10.10.200.253 namespaceSelector: matchLabels: kubernetes.io/metadata.name: demo-egress-ns EOF",
"oc new-project demo-egress-pod",
"cat <<EOF | oc apply -f - apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: demo-egress-pod spec: # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes # are deployed. egressIPs: - 10.10.100.254 - 10.10.150.254 - 10.10.200.254 namespaceSelector: matchLabels: kubernetes.io/metadata.name: demo-egress-pod podSelector: matchLabels: run: demo-egress-pod EOF",
"oc get egressips",
"NAME EGRESSIPS ASSIGNED NODE ASSIGNED EGRESSIPS demo-egress-ns 10.10.100.253 demo-egress-pod 10.10.100.254",
"rosa update machinepool USD{ROSA_MACHINE_POOL_NAME} --cluster=\"USD{ROSA_CLUSTER_NAME}\" --labels \"k8s.ovn.org/egress-assignable=\"",
"oc get egressips",
"NAME EGRESSIPS ASSIGNED NODE ASSIGNED EGRESSIPS demo-egress-ns 10.10.100.253 ip-10-10-156-122.ec2.internal 10.10.150.253 demo-egress-pod 10.10.100.254 ip-10-10-156-122.ec2.internal 10.10.150.254",
"oc -n default run demo-service --image=gcr.io/google_containers/echoserver:1.4",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Service metadata: name: demo-service namespace: default annotations: service.beta.kubernetes.io/aws-load-balancer-scheme: \"internal\" service.beta.kubernetes.io/aws-load-balancer-internal: \"true\" spec: selector: run: demo-service ports: - port: 80 targetPort: 8080 type: LoadBalancer externalTrafficPolicy: Local # NOTE: this limits the source IPs that are allowed to connect to our service. It # is being used as part of this demo, restricting connectivity to our egress # IP addresses only. # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes # are deployed. loadBalancerSourceRanges: - 10.10.100.254/32 - 10.10.150.254/32 - 10.10.200.254/32 - 10.10.100.253/32 - 10.10.150.253/32 - 10.10.200.253/32 EOF",
"export LOAD_BALANCER_HOSTNAME=USD(oc get svc -n default demo-service -o json | jq -r '.status.loadBalancer.ingress[].hostname')",
"oc run demo-egress-ns -it --namespace=demo-egress-ns --env=LOAD_BALANCER_HOSTNAME=USDLOAD_BALANCER_HOSTNAME --image=registry.access.redhat.com/ubi9/ubi -- bash",
"curl -s http://USDLOAD_BALANCER_HOSTNAME",
"CLIENT VALUES: client_address=10.10.207.247 command=GET real path=/ query=nil request_version=1.1 request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/ SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com user-agent=curl/7.76.1 BODY: -no body in request-",
"exit",
"oc run demo-egress-pod -it --namespace=demo-egress-pod --env=LOAD_BALANCER_HOSTNAME=USDLOAD_BALANCER_HOSTNAME --image=registry.access.redhat.com/ubi9/ubi -- bash",
"curl -s http://USDLOAD_BALANCER_HOSTNAME",
"CLIENT VALUES: client_address=10.10.207.247 command=GET real path=/ query=nil request_version=1.1 request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/ SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com user-agent=curl/7.76.1 BODY: -no body in request-",
"exit",
"oc run demo-egress-pod-fail -it --namespace=demo-egress-pod --env=LOAD_BALANCER_HOSTNAME=USDLOAD_BALANCER_HOSTNAME --image=registry.access.redhat.com/ubi9/ubi -- bash",
"curl -s http://USDLOAD_BALANCER_HOSTNAME",
"exit",
"oc delete svc demo-service -n default; oc delete pod demo-service -n default; oc delete project demo-egress-ns; oc delete project demo-egress-pod; oc delete egressip demo-egress-ns; oc delete egressip demo-egress-pod",
"rosa update machinepool USD{ROSA_MACHINE_POOL_NAME} --cluster=\"USD{ROSA_CLUSTER_NAME}\" --labels \"\""
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/tutorials/cloud-experts-consistent-egress-ip |
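If you want a single number for the remaining egress capacity on the nodes that will actually host egress IPs, the jq expression from the capacity check above can be narrowed to the labeled nodes and summed; this sketch assumes the k8s.ovn.org/egress-assignable label has already been applied as described in the tutorial.

    # Sum the per-node IPv4 egress capacity across the egress-assignable nodes.
    oc get node -l k8s.ovn.org/egress-assignable -o json \
      | jq '[.items[].metadata.annotations."cloud.network.openshift.io/egress-ipconfig"
             | fromjson[] | .capacity.ipv4] | add'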
Chapter 2. Nagios Core installation and configuration | Chapter 2. Nagios Core installation and configuration As a storage administrator, you can install Nagios Core by downloading the Nagios Core source code; then, configuring, making, and installing it on the node that will run the Nagios Core instance. 2.1. Installing and configuring the Nagios Core server from source There is not a Red Hat Enterprise Linux package for the Nagios Core software, so the Nagios Core software must be compiled from source. Prerequisites Internet access. Root-level access to the Nagios Core host. Procedure Install the prerequisites: Example If you are using a firewall, open port 80 for httpd : Example Create a user and group for Nagios Core: Example Download the latest version of Nagios Core and Plug-ins: Example Run ./configure : Example Compile the Nagios Core source code: Example Install Nagios source code: Example Copy the event handlers and change their ownership: Example Run the pre-flight check: Example Make and install the Nagios Core plug-ins: Example Create a user for the Nagios Core user interface: Example Important If adding a user other than nagiosadmin , ensure the /usr/local/nagios/etc/cgi.cfg file gets updated with the user name too. Modify the /usr/local/nagios/etc/objects/contacts.cfg file with the user name, full name, and email address as needed. 2.2. Starting the Nagios Core service Start the Nagios Core service to monitor the Red Hat Ceph Storage cluster health. Prerequisites Root-level access to the Nagios Core host. Procedure Add Nagios Core and Apache as a service: Example Start the Nagios Core daemon and Apache: Example 2.3. Logging into the Nagios Core server Log in to the Nagios Core server to view the health status of the Red Hat Ceph Storage cluster. Prerequisites User name and password for the Nagios dashboard. Procedure With Nagios up and running, log in to the dashboard using the credentials of the default Nagios Core user: Syntax Replace IP_ADDRESS with the IP address of your Nagios Core server. | [
"dnf install -y httpd php php-cli gcc glibc glibc-common gd gd-devel net-snmp openssl openssl-devel wget unzip make",
"firewall-cmd --zone=public --add-port=80/tcp firewall-cmd --zone=public --add-port=80/tcp --permanent",
"useradd nagios passwd nagios groupadd nagcmd usermod -a -G nagcmd nagios usermod -a -G nagcmd apache",
"wget --inet4-only https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.4.5.tar.gz wget --inet4-only http://www.nagios-plugins.org/download/nagios-plugins-2.3.3.tar.gz tar zxf nagios-4.4.5.tar.gz tar zxf nagios-plugins-2.3.3.tar.gz cd nagios-4.4.5",
"./configure --with-command-group=nagcmd",
"make all",
"make install make install-init make install-config make install-commandmode make install-webconf",
"cp -R contrib/eventhandlers/ /usr/local/nagios/libexec/ chown -R nagios:nagios /usr/local/nagios/libexec/eventhandlers",
"/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg",
"cd ../nagios-plugins-2.3.3 ./configure --with-nagios-user=nagios --with-nagios-group=nagios make make install",
"htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin",
"systemctl enable nagios systemctl enable httpd",
"systemctl start nagios systemctl start httpd",
"http:// IP_ADDRESS /nagios"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/monitoring_ceph_with_nagios_guide/nagios-core-installation-and-configuration |
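A quick post-installation check is not spelled out above, so the following sketch combines the documented verification command with two standard probes; the curl call assumes the web interface is reachable on the local host and prompts for the nagiosadmin password.

    # Re-run the configuration check, confirm both services are active, and make
    # sure the web interface answers with the nagiosadmin credentials.
    /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
    systemctl is-active nagios httpd
    curl -u nagiosadmin -I http://localhost/nagios/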
Chapter 2. Protect a service application by using OpenID Connect (OIDC) Bearer token authentication | Chapter 2. Protect a service application by using OpenID Connect (OIDC) Bearer token authentication Use the Quarkus OpenID Connect (OIDC) extension to secure a Jakarta REST application with Bearer token authentication. The bearer tokens are issued by OIDC and OAuth 2.0 compliant authorization servers, such as Keycloak . For more information about OIDC Bearer token authentication, see the Quarkus OpenID Connect (OIDC) Bearer token authentication guide. If you want to protect web applications by using OIDC Authorization Code Flow authentication, see the OpenID Connect authorization code flow mechanism for protecting web applications guide. 2.1. Prerequisites To complete this guide, you need: Roughly 15 minutes An IDE JDK 17+ installed with JAVA_HOME configured appropriately Apache Maven 3.9.6 A working container runtime (Docker or Podman ) Optionally the Quarkus CLI if you want to use it Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build) The jq command-line processor tool 2.2. Architecture This example shows how you can build a simple microservice that offers two endpoints: /api/users/me /api/admin These endpoints are protected and can only be accessed if a client sends a bearer token along with the request, which must be valid (for example, signature, expiration, and audience) and trusted by the microservice. A Keycloak server issues the bearer token and represents the subject for which the token was issued. Because it is an OAuth 2.0 authorization server, the token also references the client acting on the user's behalf. Any user with a valid token can access the /api/users/me endpoint. As a response, it returns a JSON document with user details obtained from the information in the token. The /api/admin endpoint is protected with RBAC (Role-Based Access Control), which only users with the admin role can access. At this endpoint, the @RolesAllowed annotation is used to enforce the access constraint declaratively. 2.3. Solution Follow the instructions in the sections and create the application step by step. You can also go straight to the completed example. You can clone the Git repository by running the command git clone https://github.com/quarkusio/quarkus-quickstarts.git -b 3.8 , or you can download an archive . The solution is located in the security-openid-connect-quickstart directory . 2.4. Create the Maven project You can either create a new Maven project with the oidc extension or you can add the extension to an existing Maven project. Complete one of the following commands: To create a new Maven project, use the following command: Using the Quarkus CLI: quarkus create app org.acme:security-openid-connect-quickstart \ --extension='oidc,resteasy-reactive-jackson' \ --no-code cd security-openid-connect-quickstart To create a Gradle project, add the --gradle or --gradle-kotlin-dsl option. For more information about how to install and use the Quarkus CLI, see the Quarkus CLI guide. Using Maven: mvn io.quarkus.platform:quarkus-maven-plugin:3.8.5:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=security-openid-connect-quickstart \ -Dextensions='oidc,resteasy-reactive-jackson' \ -DnoCode cd security-openid-connect-quickstart To create a Gradle project, add the -DbuildTool=gradle or -DbuildTool=gradle-kotlin-dsl option. 
For Windows users: If using cmd, (don't use backward slash \ and put everything on the same line) If using Powershell, wrap -D parameters in double quotes e.g. "-DprojectArtifactId=security-openid-connect-quickstart" If you already have your Quarkus project configured, you can add the oidc extension to your project by running the following command in your project base directory: Using the Quarkus CLI: quarkus extension add oidc Using Maven: ./mvnw quarkus:add-extension -Dextensions='oidc' Using Gradle: ./gradlew addExtension --extensions='oidc' This will add the following to your build file: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency> Using Gradle: implementation("io.quarkus:quarkus-oidc") 2.5. Write the application Implement the /api/users/me endpoint as shown in the following example, which is a regular Jakarta REST resource: package org.acme.security.openid.connect; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.jboss.resteasy.reactive.NoCache; import io.quarkus.security.identity.SecurityIdentity; @Path("/api/users") public class UsersResource { @Inject SecurityIdentity securityIdentity; @GET @Path("/me") @RolesAllowed("user") @NoCache public User me() { return new User(securityIdentity); } public static class User { private final String userName; User(SecurityIdentity securityIdentity) { this.userName = securityIdentity.getPrincipal().getName(); } public String getUserName() { return userName; } } } Implement the /api/admin endpoint as shown in the following example: package org.acme.security.openid.connect; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path("/api/admin") public class AdminResource { @GET @RolesAllowed("admin") @Produces(MediaType.TEXT_PLAIN) public String admin() { return "granted"; } } Note The main difference in this example is that the @RolesAllowed annotation is used to verify that only users granted the admin role can access the endpoint. Injection of the SecurityIdentity is supported in both @RequestScoped and @ApplicationScoped contexts. 2.6. Configure the application Configure the Quarkus OpenID Connect (OIDC) extension by setting the following configuration properties in the src/main/resources/application.properties file. %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=backend-service quarkus.oidc.credentials.secret=secret # Tell Dev Services for Keycloak to import the realm file # This property is not effective when running the application in JVM or native modes quarkus.keycloak.devservices.realm-path=quarkus-realm.json Where: %prod.quarkus.oidc.auth-server-url sets the base URL of the OpenID Connect (OIDC) server. The %prod. profile prefix ensures that Dev Services for Keycloak launches a container when you run the application in development (dev) mode. For more information, see the Run the application in dev mode section. quarkus.oidc.client-id sets a client id that identifies the application. quarkus.oidc.credentials.secret sets the client secret, which is used by the client_secret_basic authentication method. For more information, see the Quarkus OpenID Connect (OIDC) configuration properties guide. 2.7. 
Start and configure the Keycloak server Put the realm configuration file on the classpath ( target/classes directory) so that it gets imported automatically when running in dev mode. You do not need to do this if you have already built a complete solution , in which case, this realm file is added to the classpath during the build. Note Do not start the Keycloak server when you run the application in dev mode; Dev Services for Keycloak will start a container. For more information, see the Run the application in dev mode section. To start a Keycloak server, you can use Docker to run the following command: docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev Where the keycloak.version is set to version 24.0.0 or later. You can access your Keycloak server at localhost:8180 . To access the Keycloak Administration console, log in as the admin user by using the following login credentials: Username: admin Password: admin Import the realm configuration file from the upstream community repository to create a new realm. For more information, see the Keycloak documentation about creating and configuring a new realm . 2.8. Run the application in dev mode To run the application in dev mode, run the following commands: Using the Quarkus CLI: quarkus dev Using Maven: ./mvnw quarkus:dev Using Gradle: ./gradlew --console=plain quarkusDev Dev Services for Keycloak will start a Keycloak container and import a quarkus-realm.json . Open a Dev UI , which you can find at /q/dev-ui . Then, in an OpenID Connect card, click the Keycloak provider link . When prompted to log in to a Single Page Application provided by OpenID Connect Dev UI , do the following steps: Log in as alice (password: alice ), who has a user role. Accessing /api/admin returns a 403 status code. Accessing /api/users/me returns a 200 status code. Log out and log in again as admin (password: admin ), who has both admin and user roles. Accessing /api/admin returns a 200 status code. Accessing /api/users/me returns a 200 status code. 2.9. Run the Application in JVM mode When you are done with dev mode, you can run the application as a standard Java application. Compile the application: Using the Quarkus CLI: quarkus build Using Maven: ./mvnw install Using Gradle: ./gradlew build Run the application: java -jar target/quarkus-app/quarkus-run.jar 2.10. Run the application in native mode You can compile this same demo as-is into native mode without any modifications. This implies that you no longer need to install a JVM on your production environment. The runtime technology is included in the produced binary and optimized to run with minimal resources required. Compilation takes a bit longer, so this step is disabled by default. Build your application again by enabling the native profile: Using the Quarkus CLI: quarkus build --native Using Maven: ./mvnw install -Dnative Using Gradle: ./gradlew build -Dquarkus.package.type=native After waiting a little while, you run the following binary directly: ./target/security-openid-connect-quickstart-1.0.0-SNAPSHOT-runner 2.11. Test the application For information about testing your application in dev mode, see the preceding Run the application in dev mode section. You can test the application launched in JVM or native modes with curl . 
Because the application uses Bearer token authentication, you must first obtain an access token from the Keycloak server to access the application resources: export access_token=USD(\ curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token \ --user backend-service:secret \ -H 'content-type: application/x-www-form-urlencoded' \ -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token' \ ) The preceding example obtains an access token for the user alice . Any user can access the http://localhost:8080/api/users/me endpoint, which returns a JSON payload with details about the user. curl -v -X GET \ http://localhost:8080/api/users/me \ -H "Authorization: Bearer "USDaccess_token Only users with the admin role can access the http://localhost:8080/api/admin endpoint. If you try to access this endpoint with the previously-issued access token, you get a 403 response from the server. curl -v -X GET \ http://localhost:8080/api/admin \ -H "Authorization: Bearer "USDaccess_token To access the admin endpoint, obtain a token for the admin user: export access_token=USD(\ curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token \ --user backend-service:secret \ -H 'content-type: application/x-www-form-urlencoded' \ -d 'username=admin&password=admin&grant_type=password' | jq --raw-output '.access_token' \ ) For information about writing integration tests that depend on Dev Services for Keycloak , see the Dev Services for Keycloak section of the "OpenID Connect (OIDC) Bearer token authentication" guide. 2.12. References OIDC configuration properties OpenID Connect (OIDC) Bearer token authentication Keycloak Documentation OpenID Connect JSON Web Token OpenID Connect and OAuth2 Client and Filters Reference Guide Dev Services for Keycloak Sign and encrypt JWT tokens with SmallRye JWT Build Combining authentication mechanisms Quarkus Security overview | [
"quarkus create app org.acme:security-openid-connect-quickstart --extension='oidc,resteasy-reactive-jackson' --no-code cd security-openid-connect-quickstart",
"mvn io.quarkus.platform:quarkus-maven-plugin:3.8.5:create -DprojectGroupId=org.acme -DprojectArtifactId=security-openid-connect-quickstart -Dextensions='oidc,resteasy-reactive-jackson' -DnoCode cd security-openid-connect-quickstart",
"quarkus extension add oidc",
"./mvnw quarkus:add-extension -Dextensions='oidc'",
"./gradlew addExtension --extensions='oidc'",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency>",
"implementation(\"io.quarkus:quarkus-oidc\")",
"package org.acme.security.openid.connect; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.jboss.resteasy.reactive.NoCache; import io.quarkus.security.identity.SecurityIdentity; @Path(\"/api/users\") public class UsersResource { @Inject SecurityIdentity securityIdentity; @GET @Path(\"/me\") @RolesAllowed(\"user\") @NoCache public User me() { return new User(securityIdentity); } public static class User { private final String userName; User(SecurityIdentity securityIdentity) { this.userName = securityIdentity.getPrincipal().getName(); } public String getUserName() { return userName; } } }",
"package org.acme.security.openid.connect; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path(\"/api/admin\") public class AdminResource { @GET @RolesAllowed(\"admin\") @Produces(MediaType.TEXT_PLAIN) public String admin() { return \"granted\"; } }",
"%prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=backend-service quarkus.oidc.credentials.secret=secret Tell Dev Services for Keycloak to import the realm file This property is not effective when running the application in JVM or native modes quarkus.keycloak.devservices.realm-path=quarkus-realm.json",
"docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev",
"quarkus dev",
"./mvnw quarkus:dev",
"./gradlew --console=plain quarkusDev",
"quarkus build",
"./mvnw install",
"./gradlew build",
"java -jar target/quarkus-app/quarkus-run.jar",
"quarkus build --native",
"./mvnw install -Dnative",
"./gradlew build -Dquarkus.package.type=native",
"./target/security-openid-connect-quickstart-1.0.0-SNAPSHOT-runner",
"export access_token=USD( curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token --user backend-service:secret -H 'content-type: application/x-www-form-urlencoded' -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token' )",
"curl -v -X GET http://localhost:8080/api/users/me -H \"Authorization: Bearer \"USDaccess_token",
"curl -v -X GET http://localhost:8080/api/admin -H \"Authorization: Bearer \"USDaccess_token",
"export access_token=USD( curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token --user backend-service:secret -H 'content-type: application/x-www-form-urlencoded' -d 'username=admin&password=admin&grant_type=password' | jq --raw-output '.access_token' )"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/openid_connect_oidc_authentication/security-oidc-bearer-token-authentication-tutorial |
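When scripting the curl checks above, it can be handy to look only at the status codes. The following sketch reuses the access_token variable obtained earlier and standard curl options; it expects 200 for /api/users/me with any valid token and 403 for /api/admin unless the token carries the admin role.

    # Print only the HTTP status code for each protected endpoint.
    curl -s -o /dev/null -w '%{http_code}\n' \
      -H "Authorization: Bearer $access_token" http://localhost:8080/api/users/me
    curl -s -o /dev/null -w '%{http_code}\n' \
      -H "Authorization: Bearer $access_token" http://localhost:8080/api/admin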
2.9. Troubleshooting SSSD | 2.9. Troubleshooting SSSD For details about troubleshooting SSSD, see the Troubleshooting SSSD appendix in the System-Level Authentication Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/troubleshooting-sssd |
Chapter 1. Schedule and quota APIs | Chapter 1. Schedule and quota APIs 1.1. AppliedClusterResourceQuota [quota.openshift.io/v1] Description AppliedClusterResourceQuota mirrors ClusterResourceQuota at a project scope, for projection into a project. It allows a project-admin to know which ClusterResourceQuotas are applied to his project and their associated usage. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ClusterResourceQuota [quota.openshift.io/v1] Description ClusterResourceQuota mirrors ResourceQuota at a cluster scope. This object is easily convertible to synthetic ResourceQuota object to allow quota evaluation re-use. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. FlowSchema [flowcontrol.apiserver.k8s.io/v1beta1] Description FlowSchema defines the schema of a group of flows. Note that a flow is made up of a set of inbound API requests with similar attributes and is identified by a pair of strings: the name of the FlowSchema and a "flow distinguisher". Type object 1.4. LimitRange [v1] Description LimitRange sets resource usage limits for each kind of resource in a Namespace. Type object 1.5. PriorityClass [scheduling.k8s.io/v1] Description PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer. Type object 1.6. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta1] Description PriorityLevelConfiguration represents the configuration of a priority level. Type object 1.7. ResourceQuota [v1] Description ResourceQuota sets aggregate quota restrictions enforced per namespace Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/schedule_and_quota_apis/schedule-and-quota-apis |
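The API list above is reference material only; as a hedged illustration of two of the listed types, the manifest below creates a ResourceQuota and a LimitRange in a namespace. The namespace name and all limit values are arbitrary example values.

    cat <<'EOF' | oc apply -f -
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: example-quota
      namespace: example-project
    spec:
      hard:
        pods: "10"
        requests.cpu: "4"
        requests.memory: 8Gi
    ---
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: example-limits
      namespace: example-project
    spec:
      limits:
      - type: Container
        default:            # default limits applied when a container sets none
          cpu: 500m
          memory: 512Mi
        defaultRequest:     # default requests applied when a container sets none
          cpu: 250m
          memory: 256Mi
    EOF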
Chapter 4. Deploying the configured back ends | Chapter 4. Deploying the configured back ends To deploy the configured back ends, complete the following steps: Procedure Log in as the stack user. Run the following command to deploy the custom back end configuration: Important If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. For more information, see Modifying the overcloud environment in the Installing and managing Red Hat OpenStack Platform with director guide. | [
"openstack overcloud deploy --templates -e /home/stack/templates/custom-env.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_custom_block_storage_back_end/proc_deploying-configured-back-ends_custom-cinder-back-end |
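To illustrate the warning about re-passing environment files, the following sketch shows the same deployment command with earlier environment files included before the custom back-end file so that the back-end settings take precedence; every file name other than custom-env.yaml is a placeholder for the files used in your original deployment.

    openstack overcloud deploy --templates \
      -e /home/stack/templates/existing-env-1.yaml \
      -e /home/stack/templates/existing-env-2.yaml \
      -e /home/stack/templates/custom-env.yaml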
8.14. cjkuni-fonts | 8.14. cjkuni-fonts 8.14.1. RHBA-2013:0962 - cjkuni-fonts bug fix update Updated cjkuni-fonts packages that fix one bug are now available. CJK Unifonts are Unicode TrueType fonts derived from original fonts made available by Arphic Technology under the Arphic Public License and extended by the CJK Unifonts project. Bug Fix BZ# 651651 Previously, under some configurations, the KDE startup menu did not show any Chinese characters in Chinese locales (both zh-CN and zh-TW), while Japanese and Korean did not have this problem. With this update, the KDE startup menu now displays Chinese characters in Chinese locales. Users of cjkuni-fonts are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/cjkuni-fonts |
Chapter 14. Execution environments | Chapter 14. Execution environments Unlike legacy virtual environments, execution environments are container images that make it possible to incorporate system-level dependencies and collection-based content. Each execution environment enables you to have a customized image to run jobs and has only what is necessary when running the job. 14.1. Building an execution environment If your Ansible content depends on custom virtual environments instead of a default environment, you must take additional steps. You must install packages on each node, ensure that they interact well with other software installed on the host system, and keep them synchronized. To simplify this process, you can build container images that serve as Ansible Control nodes . These container images are referred to as automation execution environments, which you can create with ansible-builder. Ansible-runner can then make use of those images. 14.1.1. Install ansible-builder To build images, you must have Podman or Docker installed, along with the ansible-builder Python package. The --container-runtime option must correspond to the Podman or Docker executable you intend to use. For more information, see Quickstart for Ansible Builder , or Creating and consuming execution environments . 14.1.2. Content needed for an execution environment Ansible-builder is used to create an execution environment. An execution environment must contain: Ansible Ansible Runner Ansible Collections Python and system dependencies of: modules or plugins in collections content in ansible-base custom user needs Building a new execution environment involves a definition that specifies which content you want to include in your execution environment, such as collections, Python requirements, and system-level packages. The definition must be a .yml file. The output generated from migrating to execution environments contains some of the required data, which can be piped to a file or pasted into this definition file. Additional resources For more information, see Migrate legacy venvs to execution environments . If you did not migrate from a virtual environment, you can create a definition file with the required data described in the Execution Environment Setup Reference . Collection developers can declare requirements for their content by providing the appropriate metadata. For more information, see Dependencies . 14.1.3. Example YAML file to build an image The ansible-builder build command takes an execution environment definition as an input. It outputs the build context necessary for building an execution environment image, and then builds that image. The image can be rebuilt with the build context elsewhere, and produces the same result. By default, the builder searches for a file named execution-environment.yml in the current directory. The following example execution-environment.yml file can be used as a starting point: --- version: 3 dependencies: galaxy: requirements.yml The content of requirements.yml : --- collections: - name: awx.awx To build an execution environment using the preceding files, run the following command: ansible-builder build ... STEP 7: COMMIT my-awx-ee --> 09c930f5f6a 09c930f5f6ac329b7ddb321b144a029dbbfcc83bdfc77103968b7f6cdfc7bea2 Complete! The build context can be found at: context In addition to producing a ready-to-use container image, the build context is preserved. This can be rebuilt at a different time or location with the tools of your choice, such as docker build or podman build .
Additional resources For additional information about the ansible-builder build command, see Ansible's CLI Usage documentation. 14.1.4. Execution environment mount options Rebuilding an execution environment is one way to add certificates, but inheriting certificates from the host provides a more convenient solution. For VM-based installations, automation controller automatically mounts the system truststore in the execution environment when jobs run. You can customize execution environment mount options and mount paths in the Paths to expose to isolated jobs field of the Job Settings page, where Podman-style volume mount syntax is supported. Additional resources For more information, see the Podman documentation . 14.1.4.1. Troubleshooting execution environment mount options In some cases where the /etc/ssh/* files were added to the execution environment image due to customization of an execution environment, an SSH error can occur. For example, exposing the /etc/ssh/ssh_config.d:/etc/ssh/ssh_config.d:O path enables the container to be mounted, but the ownership permissions are not mapped correctly. Use the following procedure if you encounter this error, or if you have upgraded from an older version of automation controller: Procedure Change the container ownership on the mounted volume to root . From the navigation panel, select Settings . Select Jobs settings from the Jobs option. Expose the path in the Paths to expose to isolated jobs field, using the current example: Note The :O option is only supported for directories. Be as detailed as possible, especially when specifying system paths. Mounting /etc or /usr directly makes problems difficult to troubleshoot. This informs Podman to run a command similar to the following example, where the configuration is mounted and the ssh command works as expected: podman run -v /ssh_config:/etc/ssh/ssh_config.d/:O ... To expose isolated paths in OpenShift or Kubernetes containers as HostPath, use the following configuration: Set Expose host paths for Container Groups to On to enable it. When the playbook runs, the resulting Pod specification is similar to the following example. Note the details of the volumeMounts and volumes sections. 14.1.4.2. Mounting the directory in the execution node to the execution environment container With Ansible Automation Platform 2.1.2, only the O and z options were available. Since Ansible Automation Platform 2.2, further options such as rw are available. This is useful when using NFS storage. Procedure From the navigation panel, select Settings . Select Jobs settings from the Jobs option. Edit the Paths to expose to isolated jobs field: Enter a list of paths for volumes to be mounted from the execution node or the hybrid node into the container. Enter one path per line. The supported format is HOST-DIR[:CONTAINER-DIR[:OPTIONS]] . The allowed options are z , O , ro , and rw . Example For the rw option, configure the SELinux label correctly. For example, to mount the /foo directory, complete the following commands: sudo su mkdir /foo chmod 777 /foo semanage fcontext -a -t container_file_t "/foo(/.*)?" restorecon -vvFR /foo At a minimum, the awx user must be permitted to read and write in this directory. For this example, set the permissions to 777. Additional resources For more information about mount volumes, see the --volume option of the podman-run(1) section of the Podman documentation. 14.2.
Adding an execution environment to a job template Prerequisites An execution environment must have been created using ansible-builder as described in Build an execution environment . When an execution environment has been created, you can use it to run jobs. Use the automation controller UI to specify the execution environment to use in your job templates. Depending on whether an execution environment is made available for global use or tied to an organization, you must have the appropriate level of administrator privileges to use an execution environment in a job. For execution environments tied to an organization, you must be an Organization administrator to run jobs with those execution environments. Before running a job or job template that uses an execution environment that has a credential assigned to it, ensure that the credential contains a username, host, and password. Procedure From the navigation panel, select Administration Execution Environments . Click Add to add an execution environment. Enter the appropriate details into the following fields: Name (required): Enter a name for the execution environment. Image (required): Enter the image name. The image name requires its full location (repository), the registry, image name, and version tag, in the format repo/project/image-name:tag , for example quay.io/ansible/awx-ee:latest . Optional: Pull : Choose the type of pull when running jobs: Always pull container before running : Pulls the latest image file for the container. Only pull the image if not present before running : Pulls the image only if it is not already present. Never pull container before running : Never pulls the latest version of the container image. Note If you do not set a type for pull, the value defaults to Only pull the image if not present before running . Optional: Description : Enter a description for the execution environment. Optional: Organization : Assign the organization to specifically use this execution environment. To make the execution environment available for use across multiple organizations, leave this field blank. Registry credential : If the image has a protected container registry, provide the credential to access it. Click Save . Your newly added execution environment is ready to be used in a job template. To add an execution environment to a job template, specify it in the Execution Environment field of the job template, as shown in the following example: When you have added an execution environment to a job template, those templates are listed in the Templates tab of the execution environment: | [
"--- version: 3 dependencies: galaxy: requirements.yml",
"--- collections: - name: awx.awx",
"ansible-builder build STEP 7: COMMIT my-awx-ee --> 09c930f5f6a 09c930f5f6ac329b7ddb321b144a029dbbfcc83bdfc77103968b7f6cdfc7bea2 Complete! The build context can be found at: context",
"run -v /ssh_config:/etc/ssh/ssh_config.d/:O",
"[ \"/var/lib/awx/.ssh:/root/.ssh:O\" ]",
"sudo su",
"mkdir /foo",
"chmod 777 /foo",
"semanage fcontext -a -t container_file_t \"/foo(/.*)?\"",
"restorecon -vvFR /foo"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/assembly-controller-execution-environments |
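As a follow-up to the NFS-style rw mount described in section 14.1.4.2 above, a matching Paths to expose to isolated jobs entry for the /foo example might look like the following sketch. The path and option are illustrative assumptions, not values taken from the original procedure:

[
  "/foo:/foo:rw"
]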
Chapter 2. Updating a model | Chapter 2. Updating a model Red Hat Enterprise Linux AI allows you to upgrade locally downloaded LLMs to the latest version of the model. 2.1. Updating the models You can upgrade your local models to the latest version using the RHEL AI tool set. Prerequisites You installed the InstructLab tools with the bootable container image. You initialized InstructLab and can use the ilab CLI. You downloaded LLMs on Red Hat Enterprise Linux AI. You created a Red Hat registry account and logged in on your machine. Procedure You can upgrade any model by running the following command: USD ilab model download --repository <repository_and_model> --release latest where: <repository_and_model> Specifies the repository location of the model and the model name. You can access the models from the registry.redhat.io/rhelai1/ repository. <release> Specifies the version of the model. Set it to latest for the most up-to-date version of the model, or to a specific version. Verification You can view all the downloaded models on your system with the following command: USD ilab model list | [
"ilab model download --repository <repository_and_model> --release latest",
"ilab model list"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/updating/updating_a_model |
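As a concrete illustration of the download command described above, the following sketch pins a hypothetical model from the registry.redhat.io/rhelai1/ repository. The model name and the docker:// transport prefix are assumptions for illustration; substitute the repository and model you actually use:

# Illustrative only: the model name and transport prefix are assumptions
ilab model download --repository docker://registry.redhat.io/rhelai1/granite-7b-starter --release latest

# List the downloaded models to confirm the new version is present
ilab model list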
Chapter 12. Flow metrics | Chapter 12. Flow metrics This section shows the metrics available to a Prometheus instance when you enable the flow collector using skupper init --enable-flow-collector . Most metrics share a set of common labels as shown below; exceptions are noted beside the appropriate metrics. Note The metrics are available from https://skupper.<namespace>.svc.cluster.local:8010/api/v1alpha1/metrics when console-auth is set to internal (default) or unsecured . Replace <namespace> with the appropriate namespace where Red Hat Service Interconnect is deployed. Metrics The following metrics are available to a Prometheus instance: flows_total For the tcp protocol this is the total number of connections. For the http or http2 protocol this is the total number of distinct requests made. octets_total The total number of bytes delivered through the service network. active_flows The number of flows currently active, including open tcp connections and in-flight http requests. http_requests_method_total Total number of http requests grouped by method. Additional label: The http method , for example, GET , HEAD , POST . http_requests_result_total Total number of http requests by response code. Additional label: The http response code , for example 200 , 403 , 503 . active_links The total number of links between sites. Only sourceSite and direction labels are available for these metrics. active_routers The total number of routers. No labels available for filtering. active_sites The total number of sites. No labels available for filtering. Labels The following labels are common to most of the metrics, allowing you to filter and categorize the data: address The address relating to the metric. Typically, this is the service name. sourceSite The site where the flow originated for the metric. This string is a combination of the site name and the site ID. destSite The site where the flow terminated for the metric. This string is a combination of the site name and the site ID. direction The direction of flow. For traffic sent from a client to a server the value is incoming . For traffic sent from a server to a client the value is outgoing . protocol The protocol used by the flow, tcp , http , or http2 . sourceProcess The name of the process originating the flow. destProcess The name of the process receiving the flow. | null | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/using_service_interconnect/metrics
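To show how the metrics and labels above can be combined, here are two illustrative PromQL queries; they are sketches rather than queries taken from the product documentation. The first reports bytes per second through the service network grouped by originating site and direction, and the second returns the currently active tcp flows:

sum(rate(octets_total[5m])) by (sourceSite, direction)

active_flows{protocol="tcp"}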
Chapter 5. The Redfish modules in RHEL | Chapter 5. The Redfish modules in RHEL The Redfish modules for remote management of devices are now part of the redhat.rhel_mgmt Ansible collection. With the Redfish modules, you can easily use management automation on bare-metal servers and platform hardware by getting information about the servers or control them through an Out-Of-Band (OOB) controller, using the standard HTTPS transport and JSON format. 5.1. The Redfish modules The redhat.rhel_mgmt Ansible collection provides the Redfish modules to support hardware management in Ansible over Redfish. The redhat.rhel_mgmt collection is available in the ansible-collection-redhat-rhel_mgmt package. To install it, see Installing the redhat.rhel_mgmt Collection using the CLI . The following Redfish modules are available in the redhat.rhel_mgmt collection: redfish_info : The redfish_info module retrieves information about the remote Out-Of-Band (OOB) controller such as systems inventory. redfish_command : The redfish_command module performs Out-Of-Band (OOB) controller operations like log management and user management, and power operations such as system restart, power on and off. redfish_config : The redfish_config module performs OOB controller operations such as changing OOB configuration, or setting the BIOS configuration. 5.2. Redfish modules parameters The parameters used for the Redfish modules are: redfish_info parameters: Description baseuri (Mandatory) - Base URI of OOB controller. category (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. command (Mandatory) - List of commands to execute on OOB controller. username Username for authentication to OOB controller. password Password for authentication to OOB controller. redfish_command parameters: Description baseuri (Mandatory) - Base URI of OOB controller. category (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. command (Mandatory) - List of commands to execute on OOB controller. username Username for authentication to OOB controller. password Password for authentication to OOB controller. redfish_config parameters: Description baseuri (Mandatory) - Base URI of OOB controller. category (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. command (Mandatory) - List of commands to execute on OOB controller. username Username for authentication to OOB controller. password Password for authentication to OOB controller. bios_attributes BIOS attributes to update. 5.3. Using the redfish_info module The following example shows how to use the redfish_info module in a playbook to get information about the CPU inventory. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The ansible-collection-redhat-rhel_mgmt package is installed. The python3-pyghmi package is installed either on the control node or the managed nodes. OOB controller access details. 
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage out-of-band controllers using Redfish APIs hosts: managed-node-01.example.com tasks: - name: Get CPU inventory redhat.rhel_mgmt.redfish_info: baseuri: " <URI> " username: " <username> " password: " <password> " category: Systems command: GetCpuInventory register: result Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification When you run the playbook, Ansible returns the CPU inventory details. Additional resources /usr/share/ansible/collections/ansible_collections/redhat/rhel_mgmt/README.md file 5.4. Using the redfish_command module The following example shows how to use the redfish_command module in a playbook to turn on a system. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The ansible-collection-redhat-rhel_mgmt package is installed. The python3-pyghmi package is installed either on the control node or the managed nodes. OOB controller access details. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage out-of-band controllers using Redfish APIs hosts: managed-node-01.example.com tasks: - name: Power on system redhat.rhel_mgmt.redfish_command: baseuri: " <URI> " username: " <username> " password: " <password> " category: Systems command: PowerOn Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification The system powers on. Additional resources /usr/share/ansible/collections/ansible_collections/redhat/rhel_mgmt/README.md file 5.5. Using the redfish_config module The following example shows how to use the redfish_config module in a playbook to configure a system to boot with UEFI. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The ansible-collection-redhat-rhel_mgmt package is installed. The python3-pyghmi package is installed either on the control node or the managed nodes. OOB controller access details. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manages out-of-band controllers using Redfish APIs hosts: managed-node-01.example.com tasks: - name: Set BootMode to UEFI redhat.rhel_mgmt.redfish_config: baseuri: " <URI> " username: " <username> " password: " <password> " category: Systems command: SetBiosAttributes bios_attributes: BootMode: Uefi Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification The system boot mode is set to UEFI. 
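Relating back to the redfish_info example in section 5.3, the registered result variable is not displayed by default. A minimal, illustrative follow-up task such as the following sketch can be appended to that playbook's tasks list to print the gathered CPU inventory; the task name is an assumption:

    - name: Display the gathered CPU inventory (illustrative)
      ansible.builtin.debug:
        var: result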
Additional resources /usr/share/ansible/collections/ansible_collections/redhat/rhel_mgmt/README.md file | [
"--- - name: Manage out-of-band controllers using Redfish APIs hosts: managed-node-01.example.com tasks: - name: Get CPU inventory redhat.rhel_mgmt.redfish_info: baseuri: \" <URI> \" username: \" <username> \" password: \" <password> \" category: Systems command: GetCpuInventory register: result",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Manage out-of-band controllers using Redfish APIs hosts: managed-node-01.example.com tasks: - name: Power on system redhat.rhel_mgmt.redfish_command: baseuri: \" <URI> \" username: \" <username> \" password: \" <password> \" category: Systems command: PowerOn",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Manages out-of-band controllers using Redfish APIs hosts: managed-node-01.example.com tasks: - name: Set BootMode to UEFI redhat.rhel_mgmt.redfish_config: baseuri: \" <URI> \" username: \" <username> \" password: \" <password> \" category: Systems command: SetBiosAttributes bios_attributes: BootMode: Uefi",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/assembly_the-redfish-modules-in-rhel_automating-system-administration-by-using-rhel-system-roles |
Using JON with AMQ Broker | Using JON with AMQ Broker Red Hat AMQ 2020.Q4 For Use with AMQ Broker 7.8 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_jon_with_amq_broker/index |
Chapter 3. Installation and update | Chapter 3. Installation and update 3.1. About OpenShift Container Platform installation The OpenShift Container Platform installation program offers four methods for deploying a cluster which are detailed in the following list: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments. Automated : You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments. Full control : You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Each method deploys a cluster with the following characteristics: Highly available infrastructure with no single points of failure, which is available by default. Administrators can control what updates are applied and when. 3.1.1. About the installation program You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel with the ultimate target being a running cluster. The installation program recognizes and uses existing components instead of running commands to create them again because the program meets the dependencies. Figure 3.1. OpenShift Container Platform installation targets and dependencies 3.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS) Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.13 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. 
Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree . Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 3.1.3. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.13, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Alibaba Cloud Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) VPC Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud (VMC) on AWS VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components. For example, using a persistent storage framework from a another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.13, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power IBM Z or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. 3.1.4. 
Installation process Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages: REST API for accounts. Registry tokens, which are the pull secrets that you use to obtain the required components. Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics. In OpenShift Container Platform 4.13, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases: To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer . There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. To deploy clusters with the Agent-based Installer, you can download the Agent-based Installer first. You can then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you instead of you interacting with the installation program or setting up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. For the installation program, the program uses three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important You can modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. 
Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with the Assisted Installer Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration. OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer. The installation process with Agent-based infrastructure Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer . An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure. By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. 
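To make the kind of customization described above concrete, a heavily trimmed install-config.yaml might resemble the following sketch. Every value is a placeholder and the platform stanza varies by provider; this is not a working configuration:

apiVersion: v1
baseDomain: example.com            # placeholder base domain
metadata:
  name: demo-cluster               # placeholder cluster name
controlPlane:
  name: master
  replicas: 3                      # number of control plane machines
compute:
- name: worker
  replicas: 3                      # number of compute machines
networking:
  serviceNetwork:
  - 172.30.0.0/16                  # the Kubernetes service network CIDR mentioned above
platform:
  none: {}                         # placeholder; replace with your provider's stanza
pullSecret: '<pull_secret>'
sshKey: '<ssh_public_key>'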
In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 3.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. 
The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 3.2. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images. To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an update path is not recommended by the OpenShift Update Service, it might be because of a known issue with the update or the target release. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only updating to a newer version is supported. 
Reverting or rolling back your cluster to a version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and marks them unavailable. By default, this value is set to 1 . The MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first. The MCO updates the number of nodes as specified by the maxUnavailable field on the machine configuration pool at a time. The MCO then applies the new configuration and reboots the machine. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 3.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. 
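For illustration, the relevant portion of a ClusterVersion resource with such an override might look like the following sketch; the component named here is an arbitrary example of the structure, not a recommendation:

apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  - kind: Deployment               # resource kind of the component to leave unmanaged
    group: apps                    # API group of that resource
    namespace: openshift-console   # namespace of the component (example only)
    name: console                  # name of the component (example only)
    unmanaged: true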
Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 3.4. Next steps Selecting a cluster installation method and preparing it for users | [
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/architecture/architecture-installation |
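Tying the update-service description in section 3.2 above to day-to-day operation, a cluster administrator typically inspects and applies recommended updates through the CVO with the oc client. The following commands are a sketch of that workflow, assuming a logged-in session with sufficient privileges; the target version is a placeholder:

# Show the current version and the updates recommended by the OpenShift Update Service
oc adm upgrade

# Request an update to one of the recommended versions (placeholder value)
oc adm upgrade --to=<target_version>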
17.16. Setting vLAN Tags | 17.16. Setting vLAN Tags virtual local area network (vLAN) tags are added using the virsh net-edit command. This tag can also be used with PCI device assignment with SR-IOV devices. For more information, see Section 16.2.3, "Configuring PCI Assignment with SR-IOV Devices" . <network> <name>ovs-net</name> <forward mode='bridge'/> <bridge name='ovsbr0'/> <virtualport type='openvswitch'> <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> <vlan trunk='yes'> <tag id='42' nativeMode='untagged'/> <tag id='47'/> </vlan> <portgroup name='dontpanic'> <vlan> <tag id='42'/> </vlan> </portgroup> </network> Figure 17.30. vSetting VLAN tag (on supported network types only) If (and only if) the network type supports vlan tagging transparent to the guest, an optional <vlan> element can specify one or more vlan tags to apply to the traffic of all guests using this network. (openvswitch and type='hostdev' SR-IOV networks do support transparent vlan tagging of guest traffic; everything else, including standard linux bridges and libvirt's own virtual networks, do not support it. 802.1Qbh (vn-link) and 802.1Qbg (VEPA) switches provide their own way (outside of libvirt) to tag guest traffic onto specific vlans.) As expected, the tag attribute specifies which vlan tag to use. If a network has more than one <vlan> element defined, it is assumed that the user wants to do VLAN trunking using all the specified tags. If vlan trunking with a single tag is required, the optional attribute trunk='yes' can be added to the vlan element. For network connections using openvswitch it is possible to configure the 'native-tagged' and 'native-untagged' vlan modes. This uses the optional nativeMode attribute on the <tag> element: nativeMode may be set to 'tagged' or 'untagged'. The id attribute of the element sets the native vlan. <vlan> elements can also be specified in a <portgroup> element, as well as directly in a domain's <interface> element. If a vlan tag is specified in multiple locations, the setting in <interface> takes precedence, followed by the setting in the <portgroup> selected by the interface config. The <vlan> in <network> will be selected only if none is given in <portgroup> or <interface> . | [
"<network> <name>ovs-net</name> <forward mode='bridge'/> <bridge name='ovsbr0'/> <virtualport type='openvswitch'> <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> <vlan trunk='yes'> <tag id='42' nativeMode='untagged'/> <tag id='47'/> </vlan> <portgroup name='dontpanic'> <vlan> <tag id='42'/> </vlan> </portgroup> </network>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-setting_vlan_tags |
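As noted in the section above, a <vlan> element can also be placed directly in a domain's <interface> definition. The following fragment is an illustrative sketch of that placement; the bridge name and tag value are assumptions carried over from the network example:

<interface type='bridge'>
  <source bridge='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <vlan>
    <tag id='42'/>
  </vlan>
  <model type='virtio'/>
</interface>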
Installing on bare metal | Installing on bare metal OpenShift Container Platform 4.14 Installing OpenShift Container Platform on bare metal Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_bare_metal/index |
Chapter 6. Gathering data about your cluster | Chapter 6. Gathering data about your cluster When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. It is recommended to provide: Data gathered using the oc adm must-gather command The unique cluster ID 6.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option. For example: USD oc adm must-gather --run-namespace <namespace> \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 6.1.1. Gathering data about your cluster for Red Hat Support You can gather debugging information about your cluster by using the oc adm must-gather CLI command. Prerequisites You have access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) is installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command: USD oc adm must-gather Note Because this command picks a random control plane node by default, the pod might be scheduled to a control plane node that is in the NotReady and SchedulingDisabled state. If this command fails, for example, if you cannot schedule a pod on your cluster, then use the oc adm inspect command to gather information for particular resources. Note Contact Red Hat Support for the recommended resources to gather. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Make sure to replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the the Customer Support page of the Red Hat Customer Portal. 6.1.2. Must-gather flags The flags listed in the following table are available to use with the oc adm must-gather command. Table 6.1. 
Red Hat OpenShift Service on AWS flags for oc adm must-gather Flag Example command Description --all-images oc adm must-gather --all-images=false Collect must-gather data using the default image for all Operators on the cluster that are annotated with operators.openshift.io/must-gather-image . --dest-dir oc adm must-gather --dest-dir='<directory_name>' Set a specific directory on the local machine where the gathered data is written. --host-network oc adm must-gather --host-network=false Run must-gather pods as hostNetwork: true . Relevant if a specific command and image needs to capture host-level data. --image oc adm must-gather --image=[<plugin_image>] Specify a must-gather plugin image to run. If not specified, Red Hat OpenShift Service on AWS's default must-gather image is used. --image-stream oc adm must-gather --image-stream=[<image_stream>] Specify an`<image_stream>` using a namespace or name:tag value containing a must-gather plugin image to run. --node-name oc adm must-gather --node-name='<node>' Set a specific node to use. If not specified, by default a random master is used. --node-selector oc adm must-gather --node-selector='<node_selector_name>' Set a specific node selector to use. Only relevant when specifying a command and image which needs to capture data on a set of cluster nodes simultaneously. --run-namespace oc adm must-gather --run-namespace='<namespace>' An existing privileged namespace where must-gather pods should run. If not specified, a temporary namespace is generated. --since oc adm must-gather --since=<time> Only return logs newer than the specified duration. Defaults to all logs. Plugins are encouraged but not required to support this. Only one since-time or since may be used. --since-time oc adm must-gather --since-time='<date_and_time>' Only return logs after a specific date and time, expressed in ( RFC3339 ) format. Defaults to all logs. Plugins are encouraged but not required to support this. Only one since-time or since may be used. --source-dir oc adm must-gather --source-dir='/<directory_name>/' Set the specific directory on the pod where you copy the gathered data from. --timeout oc adm must-gather --timeout='<time>' The length of time to gather data before timing out, expressed as seconds, minutes, or hours, for example, 3s, 5m, or 2h. Time specified must be higher than zero. Defaults to 10 minutes if not specified. --volume-percentage oc adm must-gather --volume-percentage=<percent> Specify maximum percentage of pod's allocated volume that can be used for must-gather . If this limit is exceeded, must-gather stops gathering, but still copies gathered data. Defaults to 30% if not specified. 6.1.3. Gathering data about specific features You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command. Table 6.2. Supported must-gather images Image Purpose registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 Data collection for OpenShift Virtualization. registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8 Data collection for OpenShift Serverless. registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:<installed_version_service_mesh> Data collection for Red Hat OpenShift Service Mesh. 
registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v<installed_version_migration_toolkit> Data collection for the Migration Toolkit for Containers. registry.redhat.io/openshift-logging/cluster-logging-rhel9-operator:v<installed_version_logging> Data collection for logging. quay.io/netobserv/must-gather Data collection for the Network Observability Operator. registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v<installed_version_GitOps> Data collection for Red Hat OpenShift GitOps. registry.redhat.io/openshift4/ose-secrets-store-csi-mustgather-rhel9:v<installed_version_secret_store> Data collection for the Secrets Store CSI Driver Operator. Note To determine the latest version for an Red Hat OpenShift Service on AWS component's image, see the OpenShift Operator Life Cycles web page on the Red Hat Customer Portal. Prerequisites You have access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) is installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command with one or more --image or --image-stream arguments. Note To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument. For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 2 1 The default Red Hat OpenShift Service on AWS must-gather image 2 The must-gather image for OpenShift Virtualization You can use the must-gather tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For OpenShift Logging, run the following command: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator \ -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') Example 6.1. 
Example must-gather output for OpenShift Logging ├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ └── 
route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├── ... Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=quay.io/kubevirt/must-gather 2 1 The default Red Hat OpenShift Service on AWS must-gather image 2 The must-gather image for KubeVirt Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Make sure to replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the the Customer Support page of the Red Hat Customer Portal. Additional resources Red Hat OpenShift Service on AWS update life cycle 6.1.4. Gathering network logs You can gather network logs on all nodes in a cluster. Procedure Run the oc adm must-gather command with -- gather_network_logs : USD oc adm must-gather -- gather_network_logs Note By default, the must-gather tool collects the OVN nbdb and sbdb databases from all of the nodes in the cluster. Adding the -- gather_network_logs option to include additional logs that contain OVN-Kubernetes transactions for OVN nbdb database. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1 1 Replace must-gather-local.472290403699006248 with the actual directory name. Attach the compressed file to your support case on the the Customer Support page of the Red Hat Customer Portal. 6.1.5. Changing the must-gather storage limit When using the oc adm must-gather command to collect data the default maximum storage for the information is 30% of the storage capacity of the container. After the 30% limit is reached the container is killed and the gathering process stops. Information already gathered is downloaded to your local storage. To run the must-gather command again, you need either a container with more storage capacity or to adjust the maximum volume percentage. If the container reaches the storage limit, an error message similar to the following example is generated. Example output Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting... Prerequisites You have access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) is installed. Procedure Run the oc adm must-gather command with the volume-percentage flag. The new value cannot exceed 100. USD oc adm must-gather --volume-percentage <storage_percentage> 6.2. Obtaining your cluster ID When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the Red Hat OpenShift Service on AWS web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI ( oc ). Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have access to the web console or the OpenShift CLI ( oc ) installed. Procedure To manually obtain your cluster ID using OpenShift Cluster Manager : Navigate to Cluster List . 
Click on the name of the cluster you need to open a support case for. Find the value in the Cluster ID field of the Details section of the Overview tab. To open a support case and have your cluster ID autofilled using the web console: From the toolbar, navigate to (?) Help and select Share Feedback from the list. Click Open a support case from the Tell us about your experience window. To manually obtain your cluster ID using the web console: Navigate to Home Overview . The value is available in the Cluster ID field of the Details section. To obtain your cluster ID using the OpenShift CLI ( oc ), run the following command: USD oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}' 6.3. Querying cluster node journal logs You can gather journald unit logs and other logs within /var/log on individual cluster nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Query kubelet journald unit logs from Red Hat OpenShift Service on AWS cluster nodes. The following example queries control plane nodes only: USD oc adm node-logs --role=master -u kubelet 1 1 Replace kubelet as appropriate to query other unit logs. Collect logs from specific subdirectories under /var/log/ on cluster nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver/audit.log 6.4. Network trace methods Collecting network traces, in the form of packet capture records, can assist Red Hat Support with troubleshooting network issues. Red Hat OpenShift Service on AWS supports two ways of performing a network trace. Review the following table and choose the method that meets your needs. Table 6.3. Supported methods of collecting a network trace Method Benefits and capabilities Collecting a host network trace You perform a packet capture for a duration that you specify on one or more nodes at the same time. The packet capture files are transferred from nodes to the client machine when the specified duration is met. You can troubleshoot why a specific action triggers network communication issues. Run the packet capture, perform the action that triggers the issue, and use the logs to diagnose the issue. Collecting a network trace from a Red Hat OpenShift Service on AWS node or container You perform a packet capture on one node or one container. You run the tcpdump command interactively, so you can control the duration of the packet capture. You can start the packet capture manually, trigger the network communication issue, and then stop the packet capture manually. This method uses the cat command and shell redirection to copy the packet capture data from the node or container to the client machine. 6.5. Collecting a host network trace Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time. You can use a combination of the oc adm must-gather command and the registry.redhat.io/openshift4/network-tools-rhel8 container image to gather packet captures from nodes. Analyzing packet captures can help you troubleshoot network communication issues.
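The capture files produced by this method are standard pcap files. As a minimal sketch of the analysis step (the file name and filter shown here are illustrative, not values taken from your cluster), a transferred capture can be inspected locally with tcpdump, which reads saved captures with the -r option:
$ tcpdump -nn -r 2022-01-13T19:31:31.pcap 'tcp port 6443' | head -n 20
The same files can also be opened in a graphical analyzer such as Wireshark.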
The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes. The tcpdump command records the packet captures in the pods. When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine. Tip The sample command in the following procedure demonstrates performing a packet capture with the tcpdump command. However, you can run any command in the container image that is specified in the --image argument to gather troubleshooting information from multiple nodes at the same time. Prerequisites You are logged in to Red Hat OpenShift Service on AWS as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Run a packet capture from the host network on some nodes by running the following command: USD oc adm must-gather \ --dest-dir /tmp/captures \// <.> --source-dir '/tmp/tcpdump/' \// <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \// <.> --node-selector 'node-role.kubernetes.io/worker' \// <.> --host-network=true \// <.> --timeout 30s \// <.> -- \ tcpdump -i any \// <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300 <.> The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory. <.> When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod. <.> The --image argument specifies a container image that includes the tcpdump command. <.> The --node-selector argument and example value specify that the packet captures are performed on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node; an illustrative single-node variant is sketched after this procedure. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes. <.> The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node. <.> The --timeout argument and value specify that the debug pod runs for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes. <.> The -i any argument for the tcpdump command specifies that packets are captured on all network interfaces. As an alternative, you can specify a network interface name. Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets. Review the packet capture files that oc adm must-gather transferred from the pods to your client machine: tmp/captures ├── event-filter.html ├── ip-10-0-192-217-ec2-internal 1 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca... │ └── 2022-01-13T19:31:31.pcap ├── ip-10-0-201-178-ec2-internal 2 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca... │ └── 2022-01-13T19:31:30.pcap ├── ip-... └── timestamp 1 2 The packet captures are stored in directories that identify the hostname, container, and file name. If you did not specify the --node-selector argument, then the directory level for the hostname is not present.
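The following is an illustrative single-node variant of the capture command described above; the node name, port filter, and timeout are placeholder values, not output from your cluster:
$ oc adm must-gather \
    --dest-dir /tmp/captures \
    --source-dir '/tmp/tcpdump/' \
    --image registry.redhat.io/openshift4/network-tools-rhel8:latest \
    --node-name ip-10-0-192-217-ec2-internal \
    --host-network=true \
    --timeout 60s \
    -- tcpdump -nn -i any -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300 port 6443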
6.6. Collecting a network trace from a Red Hat OpenShift Service on AWS node or container When investigating potential network-related Red Hat OpenShift Service on AWS issues, Red Hat Support might request a network packet trace from a specific Red Hat OpenShift Service on AWS cluster node or from a specific container. The recommended method to capture a network trace in Red Hat OpenShift Service on AWS is through a debug pod. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have an existing Red Hat Support case ID. Procedure Obtain a list of cluster nodes: USD oc get nodes Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host From within the chroot environment console, obtain the node's interface names: # ip ad Start a toolbox container, which includes the required binaries and plugins to run sosreport : # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . To avoid tcpdump issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container. Initiate a tcpdump session on the cluster node and redirect output to a capture file. This example uses ens5 as the interface name: USD tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1 1 The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . If a tcpdump capture is required for a specific container on the node, follow these steps. Determine the target container ID. The chroot host command precedes the crictl command in this step because the toolbox container mounts the host's root directory at /host : # chroot /host crictl ps Determine the container's process ID. In this example, the container ID is a7fe32346b120 : # chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}' Initiate a tcpdump session on the container and redirect output to a capture file. This example uses 49628 as the container's process ID and ens5 as the interface name. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container's process ID, the tcpdump command is run in the container's namespace from the host: # nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1 1 The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following methods. Upload the file to an existing Red Hat support case. Concatenate the tcpdump capture file by running the oc debug node/<node_name> command and redirecting the output to a file.
This command assumes you have exited the oc debug session: USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 6.7. Providing diagnostic data to Red Hat Support When investigating Red Hat OpenShift Service on AWS issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have an existing Red Hat Support case ID. Procedure Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal. Concatenate a diagnostic file contained on a Red Hat OpenShift Service on AWS node by using the oc debug node/<node_name> command and redirecting the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz from a debug container to /var/tmp/my-diagnostic-data.tar.gz : USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 6.8. About toolbox toolbox is a tool that starts a container on a Red Hat Enterprise Linux CoreOS (RHCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run commands such as sosreport . The primary purpose for a toolbox container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image. Installing packages to a toolbox container By default, running the toolbox command starts a container with the registry.redhat.io/rhel9/support-tools:latest image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages. Prerequisites You have accessed a node with the oc debug node/<node_name> command. You can access your system as a user with root privileges. Procedure Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Start the toolbox container: # toolbox Install the additional package, such as wget : # dnf install -y <package_name> Starting an alternative image with toolbox By default, running the toolbox command starts a container with the registry.redhat.io/rhel9/support-tools:latest image.
Note You can start an alternative image by creating a .toolboxrc file and specifying the image to run. However, running an older version of the support-tools image, such as registry.redhat.io/rhel8/support-tools:latest , is not supported on Red Hat OpenShift Service on AWS 4. Prerequisites You have accessed a node with the oc debug node/<node_name> command. You can access your system as a user with root privileges. Procedure Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Optional: If you need to use an alternative image instead of the default image, create a .toolboxrc file in the home directory for the root user ID, and specify the image metadata: REGISTRY=quay.io 1 IMAGE=fedora/fedora:latest 2 TOOLBOX_NAME=toolbox-fedora-latest 3 1 Optional: Specify an alternative container registry. 2 Specify an alternative image to start. 3 Optional: Specify an alternative name for the toolbox container. Start a toolbox container by entering the following command: # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . To avoid issues with sosreport plugins, remove the running toolbox container with podman rm toolbox- and then spawn a new toolbox container. | [
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 2",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ 
├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ └── route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├──",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather -- gather_network_logs",
"tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1",
"Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting",
"oc adm must-gather --volume-percentage <storage_percentage>",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"oc adm must-gather --dest-dir /tmp/captures \\// <.> --source-dir '/tmp/tcpdump/' \\// <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \\// <.> --node-selector 'node-role.kubernetes.io/worker' \\// <.> --host-network=true \\// <.> --timeout 30s \\// <.> -- tcpdump -i any \\// <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300",
"tmp/captures ├── event-filter.html ├── ip-10-0-192-217-ec2-internal 1 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:31.pcap ├── ip-10-0-201-178-ec2-internal 2 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:30.pcap ├── ip- └── timestamp",
"oc get nodes",
"oc debug node/my-cluster-node",
"chroot /host",
"ip ad",
"toolbox",
"tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"chroot /host crictl ps",
"chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'",
"nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1",
"chroot /host",
"toolbox",
"dnf install -y <package_name>",
"chroot /host",
"REGISTRY=quay.io 1 IMAGE=fedora/fedora:latest 2 TOOLBOX_NAME=toolbox-fedora-latest 3",
"toolbox"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/support/gathering-cluster-data |
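Before attaching an archive created with the tar command shown above, you might list its contents to confirm that the expected data was captured; this is only a suggested sanity check, not a required step:
$ tar tvf must-gather.tar.gz | head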
8.28. cluster | 8.28. cluster 8.28.1. RHBA-2014:1420 - cluster bug fix and enhancement update Updated cluster packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The Red Hat Cluster Manager is a collection of technologies working together to provide data integrity and the ability to maintain application availability in the event of a failure. Bug Fixes BZ# 843160 Previously, the fencing time comparison did not work as expected when fence agent completed too fast or the corosync callback was delayed. Consequently, the Distributed Lock Manager (DLM) became unresponsive when waiting for fencing to complete. With this update, different time stamps that are not effected by the sequence of fencing or corosync callbacks are now saved and compared, and DLM no longer hangs in the aforementioned situation. BZ# 1059269 Prior to this update, the "pcs stonith confirm <node>" command failed to acknowledge the STONITH fencing technique. As a consequence, any requests from other nodes in the cluster or from clients in the same node became ignored. A patch has been provided to fix this bug, and "pcs stonith confirm" now works as expected, fencing the specified node successfully. BZ# 1029210 Due to an error in the configuration, the qdisk daemon in some situations used an incorrect "tko" parameter for its wait period when initializing. Consequently, qdisk initialization could be significantly delayed and, under some circumstances, failed entirely. With this update, the cluster configuration file has been fixed, and qdisk initialization now proceeds as expected. BZ# 980575 Previously, the ccs_read_logging() function used the create_daemon_path() function to generate daemon-specific CCS paths for the attributes. As a consequence, attributes on individual logging_daemons were not applied correctly. This bug has been fixed, and attributes on individual logging_daemons are now applied correctly. BZ# 979313 Due to a code error in corosnync, after the corosync utility terminated unexpectedly with a segmentation fault, the qdiskd daemon evicted other cluster nodes. The underlying source code has been patched, and qdiskd no longer evicts the other nodes if corosync crashes. BZ# 1074551 Prior to this update, running the "ccs_tool -verbose" command caused ccs_tool to terminate unexpectedly with a segmentation fault. This bug has been fixed, and ccs_tool now returns an error message providing more information. BZ# 1059853 Due to an overly restrictive umask, running the "gfs2_grow" command changed the /etc/mtab file permissions from default 644 to 600. A patch has been provided to fix this bug, and gfs2_grow no longer resets /etc/mtab permissions. BZ# 1062742 Previously, fsck.gfs2 did not fix corrupt quota_change system files. As a consequence, attempts to mount the file system (FS) resulted in an error, even though fsck.gfs2 reported the FS to be clean. With this patch, if fsck.gfs2 finds a corrupted quota_change file, it can rebuild it. Now, GFS2 mounts successfully as intended. BZ# 1080174 attempts to mount a GFS2 file system that had already been mounted prevented further mount attempts from other nodes from completing. With this update, mount.gfs2 no longer leaves the mount group when the file system is already mounted, and attempts to mount an already mounted GFS2 file system are handled properly. BZ# 1053668 Prior to this update, a GFS2 volume failed to mount after conversion from GFS to GFS2, and the gfs2_convert utility aborted with a segmentation fault. 
The gfs2-utils code has been patched to fix this bug, and the aforementioned conversions now proceed successfully. In addition, this update adds the following enhancement: BZ# 1081517 To aid debugging and administration, fsck.gfs2 now logs a message to the system log when it starts and ends. Users of cluster are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/cluster |
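As a hedged illustration of verifying the erratum above on a Red Hat Enterprise Linux 6 cluster node (the package names, device path, and log tag are examples and may differ on your system):
# rpm -q cman gfs2-utils
# fsck.gfs2 -n /dev/vg_cluster/lv_gfs2
# grep fsck.gfs2 /var/log/messages | tail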
Appendix B. Metadata Server daemon configuration Reference | Appendix B. Metadata Server daemon configuration Reference Refer to the following list of settings that can be used for Metadata Server (MDS) daemon configuration. mon_force_standby_active Description If set to true , monitors force MDS in standby replay mode to be active. Set under the [mon] or [global] section in the Ceph configuration file. Type Boolean Default true max_mds Description The number of active MDS daemons during cluster creation. Set under the [mon] or [global] section in the Ceph configuration file. Type 32-bit Integer Default 1 mds_cache_memory_limit Description The memory limit the MDS enforces for its cache. Red Hat recommends using this parameter instead of the mds cache size parameter. Type 64-bit Integer Unsigned Default 1073741824 mds_cache_reservation Description The cache reservation, memory or inodes, for the MDS cache to maintain. The value is a percentage of the maximum cache configured. Once the MDS begins dipping into its reservation, it recalls client state until its cache size shrinks to restore the reservation. Type Float Default 0.05 mds_cache_size Description The number of inodes to cache. A value of 0 indicates an unlimited number. Red Hat recommends using the mds_cache_memory_limit parameter to limit the amount of memory the MDS cache uses. Type 32-bit Integer Default 0 mds_cache_mid Description The insertion point for new items in the cache LRU, from the top. Type Float Default 0.7 mds_dir_commit_ratio Description The fraction of a directory that must be modified (dirty) before Ceph commits it using a full update instead of a partial update. Type Float Default 0.5 mds_dir_max_commit_size Description The maximum size of a directory update before Ceph breaks the directory into smaller transactions, in MB. Type 32-bit Integer Default 90 mds_decay_halflife Description The half-life of MDS cache temperature. Type Float Default 5 mds_beacon_interval Description The frequency, in seconds, of beacon messages sent to the monitor. Type Float Default 4 mds_beacon_grace Description The interval without beacons before Ceph declares an MDS laggy and possibly replaces it. Type Float Default 15 mds_blacklist_interval Description The blacklist duration for failed MDS daemons in the OSD map. Type Float Default 24.0*60.0 mds_session_timeout Description The interval, in seconds, of client inactivity before Ceph times out capabilities and leases. Type Float Default 60 mds_session_autoclose Description The interval, in seconds, before Ceph closes a laggy client's session. Type Float Default 300 mds_reconnect_timeout Description The interval, in seconds, to wait for clients to reconnect during MDS restart. Type Float Default 45 mds_tick_interval Description How frequently the MDS performs internal periodic tasks. Type Float Default 5 mds_dirstat_min_interval Description The minimum interval, in seconds, to try to avoid propagating recursive statistics up the tree. Type Float Default 1 mds_scatter_nudge_interval Description How quickly changes in directory statistics propagate up. Type Float Default 5 mds_client_prealloc_inos Description The number of inode numbers to preallocate per client session. Type 32-bit Integer Default 1000 mds_early_reply Description Determines whether the MDS allows clients to see request results before they commit to the journal. Type Boolean Default true mds_use_tmap Description Use trivialmap for directory updates.
Type Boolean Default true mds_default_dir_hash Description The function to use for hashing files across directory fragments. Type 32-bit Integer Default 2 ,that is, rjenkins mds_log Description Set to true if the MDS should journal metadata updates. Disable for benchmarking only. Type Boolean Default true mds_log_skip_corrupt_events Description Determines whether the MDS tries to skip corrupt journal events during journal replay. Type Boolean Default false mds_log_max_events Description The maximum events in the journal before Ceph initiates trimming. Set to -1 to disable limits. Type 32-bit Integer Default -1 mds_log_max_segments Description The maximum number of segments or objects, in the journal before Ceph initiates trimming. Set to -1 to disable limits. Type 32-bit Integer Default 30 mds_log_max_expiring Description The maximum number of segments to expire in parallels. Type 32-bit Integer Default 20 mds_log_eopen_size Description The maximum number of inodes in an EOpen event. Type 32-bit Integer Default 100 mds_bal_sample_interval Description Determines how frequently to sample directory temperature, when making fragmentation decisions. Type Float Default 3 mds_bal_replicate_threshold Description The maximum temperature before Ceph attempts to replicate metadata to other nodes. Type Float Default 8000 mds_bal_unreplicate_threshold Description The minimum temperature before Ceph stops replicating metadata to other nodes. Type Float Default 0 mds_bal_frag Description Determines whether the MDS fragments directories. Type Boolean Default false mds_bal_split_size Description The maximum directory size before the MDS splits a directory fragment into smaller bits. The root directory has a default fragment size limit of 10000. Type 32-bit Integer Default 10000 mds_bal_split_rd Description The maximum directory read temperature before Ceph splits a directory fragment. Type Float Default 25000 mds_bal_split_wr Description The maximum directory write temperature before Ceph splits a directory fragment. Type Float Default 10000 mds_bal_split_bits Description The number of bits by which to split a directory fragment. Type 32-bit Integer Default 3 mds_bal_merge_size Description The minimum directory size before Ceph tries to merge adjacent directory fragments. Type 32-bit Integer Default 50 mds_bal_merge_rd Description The minimum read temperature before Ceph merges adjacent directory fragments. Type Float Default 1000 mds_bal_merge_wr Description The minimum write temperature before Ceph merges adjacent directory fragments. Type Float Default 1000 mds_bal_interval Description The frequency, in seconds, of workload exchanges between MDS nodes. Type 32-bit Integer Default 10 mds_bal_fragment_interval Description The frequency, in seconds, of adjusting directory fragmentation. Type 32-bit Integer Default 5 mds_bal_idle_threshold Description The minimum temperature before Ceph migrates a subtree back to its parent. Type Float Default 0 mds_bal_max Description The number of iterations to run balancer before Ceph stops. For testing purposes only. Type 32-bit Integer Default -1 mds_bal_max_until Description The number of seconds to run balancer before Ceph stops. For testing purposes only. Type 32-bit Integer Default -1 mds_bal_mode Description The method for calculating MDS load: 1 = Hybrid. 2 = Request rate and latency. 3 = CPU load. Type 32-bit Integer Default 0 mds_bal_min_rebalance Description The minimum subtree temperature before Ceph migrates. 
Type Float Default 0.1 mds_bal_min_start Description The minimum subtree temperature before Ceph searches a subtree. Type Float Default 0.2 mds_bal_need_min Description The minimum fraction of target subtree size to accept. Type Float Default 0.8 mds_bal_need_max Description The maximum fraction of target subtree size to accept. Type Float Default 1.2 mds_bal_midchunk Description Ceph migrates any subtree that is larger than this fraction of the target subtree size. Type Float Default 0.3 mds_bal_minchunk Description Ceph ignores any subtree that is smaller than this fraction of the target subtree size. Type Float Default 0.001 mds_bal_target_removal_min Description The minimum number of balancer iterations before Ceph removes an old MDS target from the MDS map. Type 32-bit Integer Default 5 mds_bal_target_removal_max Description The maximum number of balancer iterations before Ceph removes an old MDS target from the MDS map. Type 32-bit Integer Default 10 mds_replay_interval Description The journal poll interval when in standby-replay mode for a hot standby . Type Float Default 1 mds_shutdown_check Description The interval for polling the cache during MDS shutdown. Type 32-bit Integer Default 0 mds_thrash_exports Description Ceph randomly exports subtrees between nodes. For testing purposes only. Type 32-bit Integer Default 0 mds_thrash_fragments Description Ceph randomly fragments or merges directories. Type 32-bit Integer Default 0 mds_dump_cache_on_map Description Ceph dumps the MDS cache contents to a file on each MDS map. Type Boolean Default false mds_dump_cache_after_rejoin Description Ceph dumps MDS cache contents to a file after rejoining the cache during recovery. Type Boolean Default false mds_verify_scatter Description Ceph asserts that various scatter/gather invariants are true . For developer use only. Type Boolean Default false mds_debug_scatterstat Description Ceph asserts that various recursive statistics invariants are true . For developer use only. Type Boolean Default false mds_debug_frag Description Ceph verifies directory fragmentation invariants when convenient. For developer use only. Type Boolean Default false mds_debug_auth_pins Description The debug authentication pin invariants. For developer use only. Type Boolean Default false mds_debug_subtrees Description Debugging subtree invariants. For developer use only. Type Boolean Default false mds_kill_mdstable_at Description Ceph injects MDS failure in MDS Table code. For developer use only. Type 32-bit Integer Default 0 mds_kill_export_at Description Ceph injects MDS failure in the subtree export code. For developer use only. Type 32-bit Integer Default 0 mds_kill_import_at Description Ceph injects MDS failure in the subtree import code. For developer use only. Type 32-bit Integer Default 0 mds_kill_link_at Description Ceph injects MDS failure in hard link code. For developer use only. Type 32-bit Integer Default 0 mds_kill_rename_at Description Ceph injects MDS failure in the rename code. For developer use only. Type 32-bit Integer Default 0 mds_wipe_sessions Description Ceph deletes all client sessions on startup. For testing purposes only. Type Boolean Default 0 mds_wipe_ino_prealloc Description Ceph deletea inode preallocation metadata on startup. For testing purposes only. Type Boolean Default 0 mds_skip_ino Description The number of inode numbers to skip on startup. For testing purposes only. 
Type 32-bit Integer Default 0 mds_standby_for_name Description The MDS daemon is a standby for another MDS daemon of the name specified in this setting. Type String Default N/A mds_standby_for_rank Description An instance of the MDS daemon is a standby for another MDS daemon instance of this rank. Type 32-bit Integer Default -1 mds_standby_replay Description Determines whether the MDS daemon polls and replays the log of an active MDS when used as a hot standby . Type Boolean Default false | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/file_system_guide/metadata-server-daemon-configuration-reference_fs |
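The settings listed above are typically placed in the Ceph configuration file on the MDS and monitor hosts. A minimal sketch, assuming a 4 GiB cache limit and a hot standby (the values shown are illustrative, not tuning recommendations):
[global]
mon_force_standby_active = true

[mds]
mds_cache_memory_limit = 4294967296
mds_cache_reservation = 0.05
mds_standby_replay = true
After changing these values, restart the affected MDS daemons, or, where your Ceph version supports it, apply equivalent options at runtime with the ceph config set command.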
7.190. scap-security-guide | 7.190. scap-security-guide 7.190.1. RHBA-2015:1334 - scap-security-guide bug fix and enhancement update Updated scap-security-guide package that fixes several bugs and adds various enhancements are now available for Red Hat Enterprise Linux 6. The scap-security-guide package provides the security guidance, baselines, and associated validation mechanisms that use Security Content Automation Protocol (SCAP). SCAP Security Guide contains the necessary data to perform system security compliance scans regarding prescribed security policy requirements; both a written description and an automated test (probe) are included. By automating the testing, SCAP Security Guide provides a convenient and reliable way to verify system compliance on a regular basis. Bug Fixes BZ# 1133963 The SCAP content for Red Hat Enterprise Linux 6 Server is now shipped also in the datastream output format. * The SCAP content for Red Hat Enterprise Linux 7 Server has been included in order to enable the possibility to perform remote scans of Red Hat Enterprise Linux 7 Server systems from Red Hat Enterprise Linux 6 systems. * This update also includes the United States Government Configuration Baseline (USGCB) profile kickstart file for a new installation of USGCB-compliant Red Hat Enterprise Linux 6 Server system. Refer to Red Hat Enterprise Linux 6 Security Guide for further details. BZ# 1183034 Previously, when checking the sysctl kernel parameters configuration, the SCAP content recognized only the settings present in the /etc/sysctl.conf file. With this update, the content has been updated to also recognize the sysctl utility settings from additional configuration files located in the /etc/sysctl.d/ directory. BZ# 1185426 Prior to this update, when performing a validation if the removable media block special devices were configured with the "nodev", "noexec", or "nosuid" options, the content could incorrectly report shared memory (/dev/shm) device as the one missing the required setting. With this update, the corresponding Open Vulnerability and Assessment Language (OVAL) checks have been corrected to verify mount options settings only for removable media block special devices. BZ# 1191409 Due to a bug in the OVAL check validation, if the listening capability of the postfix service was disabled, the system property scan returned a failure even if the postfix package was not installed on the system. This bug has been corrected and the feature of the postfix service is now reported as disabled. Also, the underlying scan result returns "PASS" when the postfix package is not installed on the system. BZ# 1199946 An earlier version of the scap-security-guide package included also an Extensible Configuration Checklist Document Format (XCCDF) profile named "test". Since the purpose of this profile is just to check basic sanity of the corresponding SCAP content and it is not intended to be applied for actual system scan, the "test" profile has now been removed. Users of scap-security-guide are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-scap-security-guide |
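A hedged example of scanning with the datastream content described above by using the oscap utility (the content path and profile ID vary between scap-security-guide versions, so list the profiles shipped on your system first):
$ oscap info /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml
$ sudo oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_usgcb-rhel6-server \
    --results results.xml --report report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml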
AMQ Streams on OpenShift Overview | AMQ Streams on OpenShift Overview Red Hat AMQ 2020.Q4 For use with AMQ Streams 1.6 on OpenShift Container Platform | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/amq_streams_on_openshift_overview/index |
Chapter 5. Important changes to external kernel parameters | Chapter 5. Important changes to external kernel parameters This chapter provides system administrators with a summary of significant changes in the kernel distributed with Red Hat Enterprise Linux 9.3. These changes could include, for example, added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. New kernel parameters amd_pstate=[X86] With this kernel parameter, you can scale the performance of the AMD CPU. Available values include: disable - Do not enable amd_pstate as the default scaling driver for the supported processors. passive - Use amd_pstate with passive mode as a scaling driver. In this mode autonomous selection is disabled. Driver requests a required performance level and platform tries to match the same performance level if it is satisfied by guaranteed performance level. active - Use amd_pstate_epp driver instance as the scaling driver, driver provides a hint to the hardware if software wants to bias toward performance (0x0) or energy efficiency (0xff) to the CPPC firmware. Then CPPC power algorithm will calculate the runtime workload and adjust the realtime cores frequency. guided - Activate guided autonomous mode. Driver requests minimum and maximum performance level and the platform autonomously selects a performance level in this range and appropriate to the current workload. arm64.nosve=[ARM64] With this kernel parameter, you can unconditionally disable Scalable Vector Extension support. arm64.nosme=[ARM64] With this kernel parameter, you can unconditionally disable Scalable Matrix Extension support. gather_data_sampling=[X86,INTEL] With this kernel parameter, you can control the Gather Data Sampling (GDS) mitigation. GDS is a hardware vulnerability that allows unprivileged speculative access to data that was previously stored in vector registers. This issue is mitigated by default in updated microcode. The mitigation might have a performance impact but can be disabled. On systems without the microcode mitigation disabling AVX serves as a mitigation. Available values include: force - Disable AVX to mitigate systems without microcode mitigation. No effect if the microcode mitigation is present. Known to cause crashes in userspace with buggy AVX enumeration. off - Disable GDS mitigation. nospectre_bhb=[ARM64] With this kernel parameter, you can disable all mitigations for Spectre-BHB (branch history injection) vulnerability. System might allow data leaks with this option. trace_clock=[FTRACE] With this kernel parameter, you can set the clock used for tracing events at boot up. Available values include: local - Use the per CPU timestamp counter. global - Event timestamps are synchronize across CPUs. Might be slower than the local clock, but better for some race conditions. counter - Simple counting of events (1, 2, ..) note, some counts might be skipped due to the infrastructure grabbing the clock more than once per event. uptime - Use jiffies as the timestamp. perf - Use the same clock that perf uses. mono - Use the ktime_get_mono_fast_ns() function for timestamps. mono_raw - Use the ktime_get_raw_fast_ns() function for timestamps. boot - Use the ktime_get_boot_fast_ns() function for timestamps. Architectures might add more clocks, see Documentation/trace/ftrace.rst for more details. Updated kernel parameters cgroup.memory=[KNL] With this kernel parameter, you can pass options to the cgroup memory controller. 
This parameter takes the format of: <string> Available values include: nosocket - Disable socket memory accounting. nokmem - Disable kernel memory accounting. [NEW] nobpf - Disable BPF memory accounting. hugetlb_free_vmemmap=[KNL] This kernel parameter enables the feature of freeing unused vmemmap pages associated with each hugetlb page on boot. For this parameter to work, the CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP configuration option must be enabled. This parameter takes the format of: { on | off (default) } Available values include: on - enables this feature off - disables this feature Note The vmemmap pages might be allocated from the added memory block itself when the memory_hotplug.memmap_on_memory module parameter is enabled. Those vmemmap pages cannot be optimized even if this feature is enabled. Other vmemmap pages not allocated from the added memory block itself are not affected. intel_pstate=[X86] You can use this kernel parameter for CPU performance scaling. Available values include: disable - Do not enable intel_pstate as the default scaling driver for the supported processors. [NEW] active - Use the intel_pstate driver to bypass the scaling governors layer of cpufreq and provide its own algorithms for P-state selection. There are two P-state selection algorithms provided by intel_pstate in the active mode: powersave and performance. The way they both operate depends on whether or not the hardware managed P-states (HWP) feature has been enabled in the processor and possibly on the processor model. passive - Use intel_pstate as a scaling driver, but configure it to work with generic cpufreq governors (instead of enabling its internal governor). This mode cannot be used along with the hardware-managed P-states (HWP) feature. force - Enable intel_pstate on systems that prohibit it by default in favor of acpi-cpufreq . Forcing the intel_pstate driver instead of acpi-cpufreq might disable platform features, such as thermal controls and power capping, that rely on ACPI P-States information being indicated to OSPM and therefore should be used with caution. This option does not work with processors that are not supported by the intel_pstate driver or on platforms that use pcc-cpufreq instead of acpi-cpufreq . no_hwp - Do not enable hardware P state control (HWP) if available. hwp_only - Only load intel_pstate on systems that support hardware P state control (HWP) if available. support_acpi_ppc - Enforce ACPI _PPC performance limits. If the Fixed ACPI Description Table specifies the preferred power management profile as "Enterprise Server" or "Performance Server", then this feature is turned on by default. per_cpu_perf_limits - Allow per-logical-CPU P-State performance control limits using the cpufreq sysfs interface. kvm-arm.mode=[KVM,ARM] With this kernel parameter, you can select one of KVM/arm64's modes of operation. Available values include: none - Forcefully disable KVM. nvhe - Standard nVHE-based mode, without support for protected guests. protected - nVHE-based mode with support for guests whose state is kept private from the host. Setting mode to protected disables kexec and hibernation for the host. [NEW] nested - VHE-based mode with support for nested virtualization. Requires at least ARMv8.3 hardware. The nested option is experimental and should be used with extreme caution. Defaults to VHE/nVHE based on hardware support. libata.force=[LIBATA] With this kernel parameter, you can force configurations. The format is a comma-separated list of "[ID:]VAL" where ID is PORT[.DEVICE].
PORT and DEVICE are decimal numbers matching port, link or device. Basically, it matches the ATA ID string printed on console by libata . If the whole ID part is omitted, the last PORT and DEVICE values are used. If ID has not been specified yet, the configuration applies to all ports, links and devices. If only the DEVICE value is omitted, the parameter applies to the port and all links and devices behind it. DEVICE number of 0 either selects the first device or the first fan-out link behind PMP device. It does not select the host link. DEVICE number of 15 selects the host link and device attached to it. The VAL specifies the configuration to force. As long as there is no ambiguity, shortcut notation is allowed. For example, both 1.5 and 1.5G would work for 1.5Gbps. With the libata.force= parameter, you can force the following configurations: Cable type: 40c, 80c, short40c, unk, ign or sata. Any ID with matching PORT is used. SATA link speed limit: 1.5Gbps or 3.0Gbps. Transfer mode: pio[0-7], mwdma[0-4] and udma[0-7]. udma[/][16,25,33,44,66,100,133] notation is also allowed. nohrst , nosrst , norst : suppress hard, soft and both resets. rstonce : only attempt one reset during hot-unplug link recovery. [NEW] [no]dbdelay : Enable or disable the extra 200ms delay before debouncing a link PHY and device presence detection. [no]ncq : Turn on or off NCQ. [no]ncqtrim : Enable or disable queued DSM TRIM. [NEW] [no]ncqati : Enable or disable NCQ trim on ATI chipset. [NEW] [no]trim : Enable or disable (unqueued) TRIM. [NEW] trim_zero : Indicate that TRIM command zeroes data. [NEW] max_trim_128m : Set 128M maximum trim size limit. [NEW] [no]dma : Turn on or off DMA transfers. atapi_dmadir : Enable ATAPI DMADIR bridge support. atapi_mod16_dma : Enable the use of ATAPI DMA for commands that are not a multiple of 16 bytes. [no]dmalog : Enable or disable the use of the READ LOG DMA EXT command to access logs. [no]iddevlog : Enable or disable access to the identify device data log. [no]logdir : Enable or disable access to the general purpose log directory. [NEW] max_sec_128 : Set transfer size limit to 128 sectors. [NEW] max_sec_1024 : Set or clear transfer size limit to 1024 sectors. [NEW] max_sec_lba48 : Set or clear transfer size limit to 65535 sectors. [NEW] [no]lpm : Enable or disable link power management. [NEW] [no]setxfer : Indicate if transfer speed mode setting should be skipped. [NEW] [no]fua : Disable or enable FUA (Force Unit Access) support for devices supporting this feature. dump_id : Dump IDENTIFY data. disable : Disable this device. Note If there are multiple matching configurations changing the same attribute, the last one is used. mitigations=[X86,PPC,S390,ARM64] With this kernel parameter, you can control optional mitigations for CPU vulnerabilities. This is a set of curated, arch-independent options, each of which is an aggregation of existing arch-specific options. Available values include: off - disable all optional CPU mitigations. This improves system performance, but it can also expose users to several CPU vulnerabilities. 
The off value is equivalent to: if nokaslr then kpti=0 [ARM64] gather_data_sampling=off [X86] kvm.nx_huge_pages=off [X86] l1tf=off [X86] mds=off [X86] mmio_stale_data=off [X86] no_entry_flush [PPC] no_uaccess_flush [PPC] nobp=0 [S390] nopti [X86,PPC] nospectre_bhb [ARM64] nospectre_v1 [X86,PPC] nospectre_v2 [X86,PPC,S390,ARM64] retbleed=off [X86] spec_store_bypass_disable=off [X86,PPC] spectre_v2_user=off [X86] srbds=off [X86,INTEL] ssbd=force-off [ARM64] tsx_async_abort=off [X86] Exceptions: This does not have any effect on kvm.nx_huge_pages when kvm.nx_huge_pages=force . auto (default) - Mitigate all CPU vulnerabilities, but leave SMT enabled, even if it is vulnerable. This is for users who do not want to be surprised by SMT getting disabled across kernel upgrades, or who have other ways of avoiding SMT-based attacks. auto , nosmt - Mitigate all CPU vulnerabilities, disabling SMT if needed. This is for users who always want to be fully mitigated, even if it means losing SMT. The auto , nosmt options are equivalent to: l1tf=flush,nosmt [X86] mds=full,nosmt [X86] tsx_async_abort=full,nosmt [X86] mmio_stale_data=full,nosmt [X86] retbleed=auto,nosmt [X86] nomodeset With this kernel parameter, you can disable kernel modesetting. Most systems' firmware sets up a display mode and provides framebuffer memory for output. With nomodeset , DRM and fbdev drivers will not load if they could possibly displace the preinitialized output. Only the system framebuffer will be available for use. The drivers will not perform display-mode changes or accelerated rendering. This parameter is especially useful as error fallback, or for testing and debugging. rdt=[HW,X86,RDT] With this kernel parameter, you can turn on or off individual RDT features. The list includes: cmt , mbmtotal , mbmlocal , l3cat , l3cdp , l2cat , l2cdp , mba , smba , bmec . For example, to turn on cmt and turn off mba use: rodata=[KNL] With this kernel parameter, you can disable read-only kernel mappings. Available options include: on - Mark read-only kernel memory as read-only (default). off - Leave read-only kernel memory writable for debugging. [NEW] full - Mark read-only kernel memory and aliases as read-only [arm64]. Removed kernel parameters nobats=[PPC] With this kernel parameter, you can forbid the use of BATs for mapping kernel lowmem on "Classic" PPC cores. noltlbs=[PPC] With this kernel parameter, you can forbid the use of huge page and tlb entries for kernel lowmem mapping on PPC40x and PPC8xx. swapaccount=[0|1]=[KNL] With this kernel parameter, you can enable or disable accounting of swap in memory resource controller. For more information, see Documentation/admin-guide/cgroup-v1/memory.rst . | [
"rdt=cmt,!mba"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.3_release_notes/kernel_parameters_changes |
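Kernel parameters such as those described in this chapter are normally added to the kernel command line. A minimal sketch using grubby on Red Hat Enterprise Linux 9 (the parameters chosen here are examples only; verify them against your hardware and security requirements before applying them):
# grubby --update-kernel=ALL --args='amd_pstate=active mitigations=auto,nosmt rdt=cmt,!mba'
# grubby --info=DEFAULT | grep args
# reboot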
Chapter 9. Supported Configurations | Chapter 9. Supported Configurations Supported configurations for the AMQ Streams 2.5 release. 9.1. Supported platforms The following platforms are tested for AMQ Streams 2.5 running with Kafka on the version of Red Hat Enterprise Linux (RHEL) stated. Operating System Architecture JVM RHEL 7 x86, amd64 Java 11 RHEL 8 and 9 x86, amd64, ppc64le (IBM Power), s390x (IBM Z and IBM(R) LinuxONE), aarch64 (64-bit ARM) Java 11 and Java 17 Platforms are tested with Open JDK 11 and 17. The IBM JDK is supported but not regularly tested against during each release. Open JDK 8, Oracle JDK 8 & 11, and IBM JDK 8 are not supported. Note Support for aarch64 (64-bit ARM) applies to AMQ Streams 2.5 when running Kafka 3.5.0 only. 9.2. Supported Apache Kafka ecosystem In AMQ Streams, only the following components released directly from the Apache Software Foundation are supported: Apache Kafka Broker Apache Kafka Connect Apache MirrorMaker Apache MirrorMaker 2 Apache Kafka Java Producer, Consumer, Management clients, and Kafka Streams Apache ZooKeeper Note Apache ZooKeeper is supported solely as an implementation detail of Apache Kafka and should not be modified for other purposes. Additionally, the cores or vCPU allocated to ZooKeeper nodes are not included in subscription compliance calculations. In other words, ZooKeeper nodes do not count towards a customer's subscription. 9.3. Additional supported features Kafka Bridge Drain Cleaner Cruise Control Distributed Tracing See also, Chapter 11, Supported integration with Red Hat products . 9.4. Storage requirements Kafka requires block storage; file storage options like NFS are not compatible. Additional resources For information on the supported configurations for the AMQ Streams 2.2 LTS release, see the AMQ Streams Supported Configurations article on the customer portal. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/release_notes_for_amq_streams_2.5_on_rhel/ref-supported-configurations-str |
Chapter 1. User and group APIs | Chapter 1. User and group APIs 1.1. Group [user.openshift.io/v1] Description Group represents a referenceable set of Users Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. Identity [user.openshift.io/v1] Description Identity records a successful authentication of a user with an identity provider. The information about the source of authentication is stored on the identity, and the identity is then associated with a single user object. Multiple identities can reference a single user. Information retrieved from the authentication provider is stored in the extra field using a schema determined by the provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. UserIdentityMapping [user.openshift.io/v1] Description UserIdentityMapping maps a user to an identity Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. User [user.openshift.io/v1] Description Upon log in, every user of the system receives a User and Identity resource. Administrators may directly manipulate the attributes of the users for their own tracking, or set groups via the API. The user name is unique and is chosen based on the value provided by the identity provider - if a user already exists with the incoming name, the user name may have a number appended to it depending on the configuration of the system. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/user_and_group_apis/user-and-group-apis |
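As a brief illustration of the Group resource described above (the group and user names are placeholders), a group can be expressed as YAML and created with the CLI:
apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: example-developers
users:
- alice
- bob
$ oc apply -f example-group.yaml
$ oc get groups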
Chapter 3. Differences between OpenShift Container Platform 3 and 4 | Chapter 3. Differences between OpenShift Container Platform 3 and 4 OpenShift Container Platform 4.15 introduces architectural changes and enhancements. The procedures that you used to manage your OpenShift Container Platform 3 cluster might not apply to OpenShift Container Platform 4. For information on configuring your OpenShift Container Platform 4 cluster, review the appropriate sections of the OpenShift Container Platform documentation. For information on new features and other notable technical changes, review the OpenShift Container Platform 4.15 release notes . It is not possible to upgrade your existing OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. You must start with a new OpenShift Container Platform 4 installation. Tools are available to assist in migrating your control plane settings and application workloads. 3.1. Architecture With OpenShift Container Platform 3, administrators individually deployed Red Hat Enterprise Linux (RHEL) hosts, and then installed OpenShift Container Platform on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates. OpenShift Container Platform 4 represents a significant change in the way that OpenShift Container Platform clusters are deployed and managed. OpenShift Container Platform 4 includes new technologies and functionality, such as Operators, machine sets, and Red Hat Enterprise Linux CoreOS (RHCOS), which are core to the operation of the cluster. This technology shift enables clusters to self-manage some functions previously performed by administrators. This also ensures platform stability and consistency, and simplifies installation and scaling. Beginning with OpenShift Container Platform 4.13, RHCOS now uses Red Hat Enterprise Linux (RHEL) 9.2 packages. This enhancement enables the latest fixes and features as well as the latest hardware support and driver updates. For more information about how this upgrade to RHEL 9.2 might affect your options, configuration, and services as well as driver and container support, see the RHCOS now uses RHEL 9.2 in the OpenShift Container Platform 4.13 release notes . For more information, see OpenShift Container Platform architecture . Immutable infrastructure OpenShift Container Platform 4 uses Red Hat Enterprise Linux CoreOS (RHCOS), which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. RHCOS is an immutable container host, rather than a customizable operating system like RHEL. RHCOS enables OpenShift Container Platform 4 to manage and automate the deployment of the underlying container host. RHCOS is a part of OpenShift Container Platform, which means that everything runs inside a container and is deployed using OpenShift Container Platform. In OpenShift Container Platform 4, control plane nodes must run RHCOS, ensuring that full-stack automation is maintained for the control plane. This makes rolling out updates and upgrades a much easier process than in OpenShift Container Platform 3. For more information, see Red Hat Enterprise Linux CoreOS (RHCOS) . Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. Operators ease the operational complexity of running another piece of software. They watch over your environment and use the current state to make decisions in real time. 
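As a quick illustration (not part of the original text; it assumes you are logged in to a 4.x cluster with the OpenShift CLI), you can list the Operators that manage the platform and see whether each one reports itself as available, progressing, or degraded: $ oc get clusteroperators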
Advanced Operators are designed to upgrade and react to failures automatically. For more information, see Understanding Operators . 3.2. Installation and upgrade Installation process To install OpenShift Container Platform 3.11, you prepared your Red Hat Enterprise Linux (RHEL) hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster. In OpenShift Container Platform 4.15, you use the OpenShift installation program to create a minimum set of resources required for a cluster. After the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, Red Hat Enterprise Linux CoreOS (RHCOS) systems are managed by the Machine Config Operator (MCO) that runs in the OpenShift Container Platform cluster. For more information, see Installation process . If you want to add Red Hat Enterprise Linux (RHEL) worker machines to your OpenShift Container Platform 4.15 cluster, you use an Ansible playbook to join the RHEL worker machines after the cluster is running. For more information, see Adding RHEL compute machines to an OpenShift Container Platform cluster . Infrastructure options In OpenShift Container Platform 3.11, you installed your cluster on infrastructure that you prepared and maintained. In addition to providing your own infrastructure, OpenShift Container Platform 4 offers an option to deploy a cluster on infrastructure that the OpenShift Container Platform installation program provisions and the cluster maintains. For more information, see OpenShift Container Platform installation overview . Upgrading your cluster In OpenShift Container Platform 3.11, you upgraded your cluster by running Ansible playbooks. In OpenShift Container Platform 4.15, the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes. You can easily upgrade your cluster by using the web console or by using the oc adm upgrade command from the OpenShift CLI and the Operators will automatically upgrade themselves. If your OpenShift Container Platform 4.15 cluster has RHEL worker machines, then you will still need to run an Ansible playbook to upgrade those worker machines. For more information, see Updating clusters . 3.3. Migration considerations Review the changes and other considerations that might affect your transition from OpenShift Container Platform 3.11 to OpenShift Container Platform 4. 3.3.1. Storage considerations Review the following storage changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.15. Local volume persistent storage Local storage is only supported by using the Local Storage Operator in OpenShift Container Platform 4.15. It is not supported to use the local provisioner method from OpenShift Container Platform 3.11. For more information, see Persistent storage using local volumes . FlexVolume persistent storage The FlexVolume plugin location changed from OpenShift Container Platform 3.11. The new location in OpenShift Container Platform 4.15 is /etc/kubernetes/kubelet-plugins/volume/exec . Attachable FlexVolume plugins are no longer supported. For more information, see Persistent storage using FlexVolume . Container Storage Interface (CSI) persistent storage Persistent storage using the Container Storage Interface (CSI) was Technology Preview in OpenShift Container Platform 3.11. OpenShift Container Platform 4.15 ships with several CSI drivers . 
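For a quick check of what is present on a given cluster (an illustration rather than a step from this guide; the drivers you see depend on your platform and configuration), you can list the registered CSI drivers with: $ oc get csidriver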
You can also install your own driver. For more information, see Persistent storage using the Container Storage Interface (CSI) . Red Hat OpenShift Data Foundation OpenShift Container Storage 3, which is available for use with OpenShift Container Platform 3.11, uses Red Hat Gluster Storage as the backing storage. Red Hat OpenShift Data Foundation 4, which is available for use with OpenShift Container Platform 4, uses Red Hat Ceph Storage as the backing storage. For more information, see Persistent storage using Red Hat OpenShift Data Foundation and the interoperability matrix article. Unsupported persistent storage options Support for the following persistent storage options from OpenShift Container Platform 3.11 has changed in OpenShift Container Platform 4.15: GlusterFS is no longer supported. CephFS as a standalone product is no longer supported. Ceph RBD as a standalone product is no longer supported. If you used one of these in OpenShift Container Platform 3.11, you must choose a different persistent storage option for full support in OpenShift Container Platform 4.15. For more information, see Understanding persistent storage . Migration of in-tree volumes to CSI drivers OpenShift Container Platform 4 is migrating in-tree volume plugins to their Container Storage Interface (CSI) counterparts. In OpenShift Container Platform 4.15, CSI drivers are the new default for the following in-tree volume types: Amazon Web Services (AWS) Elastic Block Storage (EBS) Azure Disk Azure File Google Cloud Platform Persistent Disk (GCP PD) OpenStack Cinder VMware vSphere Note As of OpenShift Container Platform 4.13, VMware vSphere is not available by default. However, you can opt into VMware vSphere. All aspects of volume lifecycle, such as creation, deletion, mounting, and unmounting, are handled by the CSI driver. For more information, see CSI automatic migration . 3.3.2. Networking considerations Review the following networking changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.15. Network isolation mode The default network isolation mode for OpenShift Container Platform 3.11 was ovs-subnet , though users frequently switched to use ovs-multitenant . The default network isolation mode for OpenShift Container Platform 4.15 is controlled by a network policy. If your OpenShift Container Platform 3.11 cluster used the ovs-subnet or ovs-multitenant mode, it is recommended to switch to a network policy for your OpenShift Container Platform 4.15 cluster. Network policies are supported upstream, are more flexible, and they provide the functionality that ovs-multitenant does. If you want to maintain the ovs-multitenant behavior while using a network policy in OpenShift Container Platform 4.15, follow the steps to configure multitenant isolation using network policy . For more information, see About network policy . OVN-Kubernetes as the default networking plugin in Red Hat OpenShift Networking In OpenShift Container Platform 3.11, OpenShift SDN was the default networking plugin in Red Hat OpenShift Networking. In OpenShift Container Platform 4.15, OVN-Kubernetes is now the default networking plugin. For information on migrating to OVN-Kubernetes from OpenShift SDN, see Migrating from the OpenShift SDN network plugin . 3.3.3. Logging considerations Review the following logging changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.15. 
Deploying OpenShift Logging OpenShift Container Platform 4 provides a simple deployment mechanism for OpenShift Logging, by using a Cluster Logging custom resource. For more information, see Installing OpenShift Logging . Aggregated logging data You cannot transition your aggregate logging data from OpenShift Container Platform 3.11 into your new OpenShift Container Platform 4 cluster. For more information, see About OpenShift Logging . Unsupported logging configurations Some logging configurations that were available in OpenShift Container Platform 3.11 are no longer supported in OpenShift Container Platform 4.15. For more information on the explicitly unsupported logging cases, see the logging support documentation . 3.3.4. Security considerations Review the following security changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.15. Unauthenticated access to discovery endpoints In OpenShift Container Platform 3.11, an unauthenticated user could access the discovery endpoints (for example, /api/* and /apis/* ). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in OpenShift Container Platform 4.15. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications as this can expose internal cluster components to the external network. Identity providers Configuration for identity providers has changed for OpenShift Container Platform 4, including the following notable changes: The request header identity provider in OpenShift Container Platform 4.15 requires mutual TLS, whereas in OpenShift Container Platform 3.11 it did not. The configuration of the OpenID Connect identity provider was simplified in OpenShift Container Platform 4.15. It now obtains data, which previously had to be specified in OpenShift Container Platform 3.11, from the provider's /.well-known/openid-configuration endpoint. For more information, see Understanding identity provider configuration . OAuth token storage format Newly created OAuth HTTP bearer tokens no longer match the names of their OAuth access token objects. The object names are now a hash of the bearer token and are no longer sensitive. This reduces the risk of leaking sensitive information. Default security context constraints The restricted security context constraints (SCC) in OpenShift Container Platform 4 can no longer be accessed by any authenticated user as the restricted SCC in OpenShift Container Platform 3.11 could. The broad authenticated access is now granted to the restricted-v2 SCC, which is more restrictive than the old restricted SCC. The restricted SCC still exists; users that want to use it must be specifically given permissions to do so. For more information, see Managing security context constraints . 3.3.5. Monitoring considerations Review the following monitoring changes when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.15. You cannot migrate Hawkular configurations and metrics to Prometheus. Alert for monitoring infrastructure availability The default alert that triggers to ensure the availability of the monitoring infrastructure was called DeadMansSwitch in OpenShift Container Platform 3.11. This was renamed to Watchdog in OpenShift Container Platform 4. 
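As an illustrative check (assuming you can view the openshift-monitoring project with the OpenShift CLI), you can confirm that the renamed rule is present on a 4.x cluster by listing the platform alerting rules: $ oc -n openshift-monitoring get prometheusrules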
If you had PagerDuty integration set up with this alert in OpenShift Container Platform 3.11, you must set up the PagerDuty integration for the Watchdog alert in OpenShift Container Platform 4. For more information, see Configuring alert routing for default platform alerts . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/migrating_from_version_3_to_4/planning-migration-3-4 |
Chapter 8. Troubleshooting Ceph objects | Chapter 8. Troubleshooting Ceph objects As a storage administrator, you can use the ceph-objectstore-tool utility to perform high-level or low-level object operations. The ceph-objectstore-tool utility can help you troubleshoot problems related to objects within a particular OSD or placement group. Important Manipulating objects can cause unrecoverable data loss. Contact Red Hat support before using the ceph-objectstore-tool utility. Prerequisites Verify there are no network-related issues. 8.1. Troubleshooting high-level object operations As a storage administrator, you can use the ceph-objectstore-tool utility to perform high-level object operations. The ceph-objectstore-tool utility supports the following high-level object operations: List objects List lost objects Fix lost objects Important Manipulating objects can cause unrecoverable data loss. Contact Red Hat support before using the ceph-objectstore-tool utility. Prerequisites Root-level access to the Ceph OSD nodes. 8.1.1. Listing objects The OSD can contain zero to many placement groups, and zero to many objects within a placement group (PG). The ceph-objectstore-tool utility allows you to list objects stored within an OSD. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Syntax Example Log in to the OSD container: Syntax Example Identify all the objects within an OSD, regardless of their placement group: Syntax Example Identify all the objects within a placement group: Syntax Example Identify the PG an object belongs to: Syntax Example 8.1.2. Fixing lost objects You can use the ceph-objectstore-tool utility to list and fix lost and unfound objects stored within a Ceph OSD. This procedure applies only to legacy objects. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Syntax Example Log in to the OSD container: Syntax Example To list all the lost legacy objects: Syntax Example Use the ceph-objectstore-tool utility to fix lost and unfound objects. Select the appropriate circumstance: To fix all lost objects: Syntax Example To fix all the lost objects within a placement group: Syntax Example To fix a lost object by its identifier: Syntax Example 8.2. Troubleshooting low-level object operations As a storage administrator, you can use the ceph-objectstore-tool utility to perform low-level object operations. The ceph-objectstore-tool utility supports the following low-level object operations: Manipulate the object's content Remove an object List the object map (OMAP) Manipulate the OMAP header Manipulate the OMAP key List the object's attributes Manipulate the object's attribute key Important Manipulating objects can cause unrecoverable data loss. Contact Red Hat support before using the ceph-objectstore-tool utility. Prerequisites Root-level access to the Ceph OSD nodes. 8.2.1. Manipulating the object's content With the ceph-objectstore-tool utility, you can get or set bytes on an object. Important Setting the bytes on an object can cause unrecoverable data loss. To prevent data loss, make a backup copy of the object. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Syntax Example Find the object by listing the objects of the OSD or placement group (PG). 
Log in to the OSD container: Syntax Example Before setting the bytes on an object, make a backup and a working copy of the object: Syntax Example Edit the working copy object file and modify the object contents accordingly. Set the bytes of the object: Syntax Example 8.2.2. Removing an object Use the ceph-objectstore-tool utility to remove an object. By removing an object, its contents and references are removed from the placement group (PG). Important You cannot recreate an object once it is removed. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Log in to the OSD container: Syntax Example Remove an object: Syntax Example 8.2.3. Listing the object map Use the ceph-objectstore-tool utility to list the contents of the object map (OMAP). The output provides you a list of keys. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Syntax Example Log in to the OSD container: Syntax Example List the object map: Syntax Example 8.2.4. Manipulating the object map header The ceph-objectstore-tool utility outputs the object map (OMAP) header with the values associated with the object's keys. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Syntax Example Log in to the OSD container: Syntax Example Get the object map header: Syntax Example Set the object map header: Syntax Example 8.2.5. Manipulating the object map key Use the ceph-objectstore-tool utility to change the object map (OMAP) key. You need to provide the data path, the placement group identifier (PG ID), the object, and the key in the OMAP. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Log in to the OSD container: Syntax Example Get the object map key: Syntax Example Set the object map key: Syntax Example Remove the object map key: Syntax Example 8.2.6. Listing the object's attributes Use the ceph-objectstore-tool utility to list an object's attributes. The output provides you with the object's keys and values. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Syntax Example Log in to the OSD container: Syntax Example List the object's attributes: Syntax Example 8.2.7. Manipulating the object attribute key Use the ceph-objectstore-tool utility to change an object's attributes. To manipulate the object's attributes you need the data paths, the placement group identifier (PG ID), the object, and the key in the object's attribute. Prerequisites Root-level access to the Ceph OSD node. Stop the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Syntax Example Log in to the OSD container: Syntax Example Get the object's attributes: Syntax Example Set an object's attributes: Syntax Example Remove an object's attributes: Syntax Example Additional Resources For Red Hat Ceph Storage support, see the Red Hat Customer Portal . | [
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op list",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID --op list",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op list",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op list OBJECT_ID",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list default.region",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost --dry-run",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost --dry-run",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID --op fix-lost",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op fix-lost",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost OBJECT_ID",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost default.region",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-bytes > OBJECT_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-bytes > zone_info.default.backup ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-bytes > zone_info.default.working-copy",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-bytes < OBJECT_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-bytes < zone_info.default.working-copy",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT remove",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' remove",
"systemctl status ceph-osd@ OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT list-omap",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' list-omap",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omaphdr > OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-omaphdr > zone_info.default.omaphdr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omaphdr < OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-omaphdr < zone_info.default.omaphdr.txt",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omap KEY > OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-omap \"\" > zone_info.default.omap.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-omap KEY < OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-omap \"\" < zone_info.default.omap.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT rm-omap KEY",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' rm-omap \"\"",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT list-attrs",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' list-attrs",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-attr KEY > OBJECT_ATTRS_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-attr \"oid\" > zone_info.default.attr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-attr KEY < OBJECT_ATTRS_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-attr \"oid\"<zone_info.default.attr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT rm-attr KEY",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' rm-attr \"oid\""
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/troubleshooting_guide/troubleshooting-ceph-objects |
Chapter 1. Prerequisites | Chapter 1. Prerequisites You can use installer-provisioned installation to install OpenShift Container Platform on IBM Cloud® Bare Metal (Classic) nodes. This document describes the prerequisites and procedures when installing OpenShift Container Platform on IBM Cloud nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Redfish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud deployments. A provisioning network is required. Installer-provisioned installation of OpenShift Container Platform requires: One node with Red Hat Enterprise Linux (RHEL) 8.x installed, for running the provisioner Three control plane nodes One routable network One provisioning network Before starting an installer-provisioned installation of OpenShift Container Platform on IBM Cloud Bare Metal (Classic), address the following prerequisites and requirements. 1.1. Setting up IBM Cloud Bare Metal (Classic) infrastructure To deploy an OpenShift Container Platform cluster on IBM Cloud® Bare Metal (Classic) infrastructure, you must first provision the IBM Cloud nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Redfish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud deployments. The provisioning network is required. You can customize IBM Cloud nodes using the IBM Cloud API. When creating IBM Cloud nodes, you must consider the following requirements. Use one data center per cluster All nodes in the OpenShift Container Platform cluster must run in the same IBM Cloud data center. Create public and private VLANs Create all nodes with a single public VLAN and a single private VLAN. Ensure subnets have sufficient IP addresses IBM Cloud public VLAN subnets use a /28 prefix by default, which provides 16 IP addresses. That is sufficient for a cluster consisting of three control plane nodes, four worker nodes, and two IP addresses for the API VIP and Ingress VIP on the baremetal network. For larger clusters, you might need a smaller prefix. IBM Cloud private VLAN subnets use a /26 prefix by default, which provides 64 IP addresses. IBM Cloud Bare Metal (Classic) uses private network IP addresses to access the Baseboard Management Controller (BMC) of each node. OpenShift Container Platform creates an additional subnet for the provisioning network. Network traffic for the provisioning network subnet routes through the private VLAN. For larger clusters, you might need a smaller prefix. Table 1.1. IP addresses per prefix IP addresses Prefix 32 /27 64 /26 128 /25 256 /24 Configuring NICs OpenShift Container Platform deploys with two networks: provisioning : The provisioning network is a non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. baremetal : The baremetal network is a routable network. You can use any NIC order to interface with the baremetal network, provided it is not the NIC specified in the provisioningNetworkInterface configuration setting or the NIC associated to a node's bootMACAddress configuration setting for the provisioning network. While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. 
For example: NIC Network VLAN NIC1 provisioning <provisioning_vlan> NIC2 baremetal <baremetal_vlan> In the example, NIC1 on all control plane and worker nodes connects to the non-routable network ( provisioning ) that is only used for the installation of the OpenShift Container Platform cluster. NIC2 on all control plane and worker nodes connects to the routable baremetal network. PXE Boot order NIC1 PXE-enabled provisioning network 1 NIC2 baremetal network. 2 Note Ensure PXE is enabled on the NIC used for the provisioning network and is disabled on all other NICs. Configuring canonical names Clients access the OpenShift Container Platform cluster nodes over the baremetal network. Configure IBM Cloud subdomains or subzones where the canonical name extension is the cluster name. For example: Creating DNS entries You must create DNS A record entries resolving to unused IP addresses on the public subnet for the following: Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Control plane and worker nodes already have DNS entries after provisioning. The following table provides an example of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The host names of the control plane and worker nodes are examples, so you can use any host naming convention you prefer. Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Provisioner node provisioner.<cluster_name>.<domain> <ip> Master-0 openshift-master-0.<cluster_name>.<domain> <ip> Master-1 openshift-master-1.<cluster_name>.<domain> <ip> Master-2 openshift-master-2.<cluster_name>.<domain> <ip> Worker-0 openshift-worker-0.<cluster_name>.<domain> <ip> Worker-1 openshift-worker-1.<cluster_name>.<domain> <ip> Worker-n openshift-worker-n.<cluster_name>.<domain> <ip> OpenShift Container Platform includes functionality that uses cluster membership information to generate A records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS. Important After provisioning the IBM Cloud nodes, you must create a DNS entry for the api.<cluster_name>.<domain> domain name on the external DNS because removing CoreDNS causes the local entry to disappear. Failure to create a DNS record for the api.<cluster_name>.<domain> domain name in the external DNS server prevents worker nodes from joining the cluster. Network Time Protocol (NTP) Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync. Important Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail. Configure a DHCP server IBM Cloud Bare Metal (Classic) does not run DHCP on the public or private VLANs. After provisioning IBM Cloud nodes, you must set up a DHCP server for the public VLAN, which corresponds to OpenShift Container Platform's baremetal network. Note The IP addresses allocated to each node do not need to match the IP addresses allocated by the IBM Cloud Bare Metal (Classic) provisioning system. See the "Configuring the public subnet" section for details. 
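For illustration only (the interface name, address range, and the choice of dnsmasq are assumptions rather than values from this guide), a DHCP service for the public VLAN could be sketched with a dnsmasq configuration such as the following:
# Serve DHCP only on the NIC attached to the public (baremetal) VLAN
interface=eth1
dhcp-range=203.0.113.20,203.0.113.60,24h
# Option 3 sets the default gateway handed out to the nodes
dhcp-option=3,203.0.113.1
# Optionally pin a node to a fixed address by its NIC2 MAC address
dhcp-host=<nic2_mac_address>,openshift-master-0.<cluster_name>.<domain>,203.0.113.21
Follow the "Configuring the public subnet" section for the supported procedure and the exact addresses for your subnet.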
Ensure BMC access privileges The "Remote management" page for each node on the dashboard contains the node's intelligent platform management interface (IPMI) credentials. The default IPMI privileges prevent the user from making certain boot target changes. You must change the privilege level to OPERATOR so that Ironic can make those changes. In the install-config.yaml file, add the privilegelevel parameter to the URLs used to configure each BMC. See the "Configuring the install-config.yaml file" section for additional details. For example: ipmi://<IP>:<port>?privilegelevel=OPERATOR Alternatively, contact IBM Cloud support and request that they increase the IPMI privileges to ADMINISTRATOR for each node. Create bare metal servers Create bare metal servers in the IBM Cloud dashboard by navigating to Create resource Bare Metal Servers for Classic . Alternatively, you can create bare metal servers with the ibmcloud CLI utility. For example: $ ibmcloud sl hardware create --hostname <SERVERNAME> \ --domain <DOMAIN> \ --size <SIZE> \ --os <OS-TYPE> \ --datacenter <DC-NAME> \ --port-speed <SPEED> \ --billing <BILLING> See Installing the stand-alone IBM Cloud CLI for details on installing the IBM Cloud CLI. Note IBM Cloud servers might take 3-5 hours to become available. | [
"<cluster_name>.<domain>",
"test-cluster.example.com",
"ipmi://<IP>:<port>?privilegelevel=OPERATOR",
"ibmcloud sl hardware create --hostname <SERVERNAME> --domain <DOMAIN> --size <SIZE> --os <OS-TYPE> --datacenter <DC-NAME> --port-speed <SPEED> --billing <BILLING>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_ibm_cloud_bare_metal_classic/install-ibm-cloud-prerequisites |
2.7. Considerations for Using Conga | 2.7. Considerations for Using Conga When using Conga to configure and manage your Red Hat Cluster, make sure that each computer running luci (the Conga user interface server) is running on the same network that the cluster is using for cluster communication. Otherwise, luci cannot configure the nodes to communicate on the right network. If the computer running luci is on another network (for example, a public network rather than a private network that the cluster is communicating on), contact an authorized Red Hat support representative to make sure that the appropriate host name is configured for each cluster node. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-conga-considerations-CA |
Chapter 7. Firewalls | Chapter 7. Firewalls Information security is commonly thought of as a process and not a product. However, standard security implementations usually employ some form of dedicated mechanism to control access privileges and restrict network resources to users who are authorized, identifiable, and traceable. Red Hat Enterprise Linux includes several powerful tools to assist administrators and security engineers with network-level access control issues. Along with VPN solutions, such as IPsec (discussed in Chapter 6, Virtual Private Networks ), firewalls are one of the core components of a network security implementation. Several vendors market firewall solutions catering to all levels of the marketplace: from home users protecting one PC to data center solutions safeguarding vital enterprise information. Firewalls can be standalone hardware solutions, such as firewall appliances by Cisco, Nokia, and Sonicwall. There are also proprietary software firewall solutions developed for home and business markets by vendors such as Checkpoint, McAfee, and Symantec. Apart from the differences between hardware and software firewalls, there are also differences in the way firewalls function that separate one solution from another. Table 7.1, "Firewall Types" details three common types of firewalls and how they function: Table 7.1. Firewall Types Method Description Advantages Disadvantages NAT Network Address Translation (NAT) places private IP subnetworks behind one or a small pool of public IP addresses, masquerading all requests to one source rather than several. · Can be configured transparently to machines on a LAN · Protection of many machines and services behind one or more external IP address(es) simplifies administration duties · Restriction of user access to and from the LAN can be configured by opening and closing ports on the NAT firewall/gateway · Cannot prevent malicious activity once users connect to a service outside of the firewall Packet Filter A packet filtering firewall reads each data packet that passes within and outside of a LAN. It can read and process packets by header information and filters the packet based on sets of programmable rules implemented by the firewall administrator. The Linux kernel has built-in packet filtering functionality through the Netfilter kernel subsystem. · Customizable through the iptables front-end utility · Does not require any customization on the client side, as all network activity is filtered at the router level rather than the application level · Since packets are not transmitted through a proxy, network performance is faster due to direct connection from client to remote host · Cannot filter packets for content like proxy firewalls · Processes packets at the protocol layer, but cannot filter packets at an application layer · Complex network architectures can make establishing packet filtering rules difficult, especially if coupled with IP masquerading or local subnets and DMZ networks Proxy Proxy firewalls filter all requests of a certain protocol or type from LAN clients to a proxy machine, which then makes those requests to the Internet on behalf of the local client. A proxy machine acts as a buffer between malicious remote users and the internal network client machines. 
· Gives administrators control over what applications and protocols function outside of the LAN · Some proxy servers can cache frequently-accessed data locally rather than having to use the Internet connection to request it, which is convenient for cutting down on unnecessary bandwidth consumption · Proxy services can be logged and monitored closely, allowing tighter control over resource utilization on the network · Proxies are often application specific (HTTP, Telnet, etc.) or protocol restricted (most proxies work with TCP connected services only) · Application services cannot run behind a proxy, so your application servers must use a separate form of network security · Proxies can become a network bottleneck, as all requests and transmissions are passed through one source rather than directly from a client to a remote service 7.1. Netfilter and iptables The Linux kernel features a powerful networking subsystem called Netfilter . The Netfilter subsystem provides stateful or stateless packet filtering as well as NAT and IP masquerading services. Netfilter also has the ability to mangle IP header information for advanced routing and connection state management. Netfilter is controlled through the iptables utility. 7.1.1. iptables Overview The power and flexibility of Netfilter is implemented through the iptables interface. This command line tool is similar in syntax to its predecessor, ipchains ; however, iptables uses the Netfilter subsystem to enhance network connection, inspection, and processing; whereas ipchains used intricate rule sets for filtering source and destination paths, as well as connection ports for both. iptables features advanced logging, pre- and post-routing actions, network address translation, and port forwarding all in one command line interface. This section provides an overview of iptables . For more detailed information about iptables , refer to the Reference Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ch-fw |
Chapter 5. Backing OpenShift Container Platform applications with OpenShift Data Foundation | Chapter 5. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads Deployments . In the Deployments page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads Deployments . Click Workloads Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_and_allocating_storage_resources/backing-openshift-container-platform-applications-with-openshift-data-foundation_rhodf |
Preface | Preface Red Hat OpenShift Data Foundation supports deployment on any platform that you provision including bare metal, virtualized, and cloud environments. Both internal and external OpenShift Data Foundation clusters are supported on these environments. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_on_any_platform/preface-agnostic |
Chapter 3. Basic Security | Chapter 3. Basic Security This chapter describes the basic steps to configure security before you start Karaf for the first time. By default, Karaf is secure, but none of its services are remotely accessible. This chapter explains how to enable secure access to the ports exposed by Karaf. 3.1. Configuring Basic Security 3.1.1. Overview The Apache Karaf runtime is secured against network attack by default, because all of its exposed ports require user authentication and no users are defined initially. In other words, the Apache Karaf runtime is remotely inaccessible by default. If you want to access the runtime remotely, you must first customize the security configuration, as described here. 3.1.2. Before you start the container If you want to enable remote access to the Karaf container, you must create a secure JAAS user before starting the container: 3.1.3. Create a secure JAAS user By default, no JAAS users are defined for the container, which effectively disables remote access (it is impossible to log on). To create a secure JAAS user, edit the InstallDir/etc/users.properties file and add a new user field, as follows: Where Username and Password are the new user credentials. The admin role gives this user the privileges to access all administration and management functions of the container. Do not define a numeric username with a leading zero. Such usernames will always cause a login attempt to fail. This is because the Karaf shell, which the console uses, drops leading zeros when the input appears to be a number. For example: Warning It is strongly recommended that you define custom user credentials with a strong password. 3.1.4. Role-based access control The Karaf container supports role-based access control, which regulates access through the JMX protocol, the Karaf command console, and the Fuse Management console. When assigning roles to users, you can choose from the set of standard roles, which provide the levels of access described in Table 3.1, "Standard Roles for Access Control" . Table 3.1. Standard Roles for Access Control Roles Description viewer Grants read-only access to the container. manager Grants read-write access at the appropriate level for ordinary users, who want to deploy and run applications. But blocks access to sensitive container configuration settings. admin Grants unrestricted access to the container. ssh Grants permission for remote console access through the SSH port. For more details about role-based access control, see Role-Based Access Control . 3.1.5. Ports exposed by the Apache Karaf container The following ports are exposed by the container: Console port - enables remote control of a container instance, through Apache Karaf shell commands. This port is enabled by default and is secured both by JAAS authentication and by SSH. JMX port - enables management of the container through the JMX protocol. This port is enabled by default and is secured by JAAS authentication. Web console port - provides access to an embedded Undertow container that can host Web console servlets. By default, the Fuse Console is installed in the Undertow container. 3.1.6. Enabling the remote console port You can access the remote console port whenever both of the following conditions are true: JAAS is configured with at least one set of login credentials. The Karaf runtime has not been started in client mode (client mode disables the remote console port completely). 
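If the account you created should also be allowed to connect remotely, it can be given the ssh role alongside admin in InstallDir/etc/users.properties (an illustrative entry; the user name and password here are placeholders, not values from this guide): Username=StrongPassword,admin,ssh In some Karaf-based distributions the role accepted for SSH access is set by the sshRole property in etc/org.apache.karaf.shell.cfg, so check that file if a login is unexpectedly refused.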
For example, to log on to the remote console port from the same machine where the container is running, enter the following command: Where the Username and Password are the credentials of a JAAS user with the ssh role. When accessing the Karaf console through the remote port, your privileges depend on the roles assigned to the user in the etc/users.properties file. If you want access to the complete set of console commands, the user account must have the admin role. 3.1.7. Strengthening security on the remote console port You can employ the following measures to strengthen security on the remote console port: Make sure that the JAAS user credentials have strong passwords. Customize the X.509 certificate (replace the Java keystore file, InstallDir/etc/host.key , with a custom key pair). 3.1.8. Enabling the JMX port The JMX port is enabled by default and secured by JAAS authentication. In order to access the JMX port, you must have configured JAAS with at least one set of login credentials. To connect to the JMX port, open a JMX client (for example, jconsole ) and connect to the following JMX URI: You must also provide valid JAAS credentials to the JMX client in order to connect. Note In general, the tail of the JMX URI has the format /karaf- ContainerName . If you change the container name from root to some other name, you must modify the JMX URI accordingly. 3.1.9. Strengthening security on the Fuse Console port The Fuse Console is already secured by JAAS authentication. To add SSL security, see Securing the Undertow HTTP Server . | [
"Username=Password,admin",
"karaf@root> echo 0123 123 karaf@root> echo 00.123 0.123 karaf@root>",
"./client -u Username -p Password",
"service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/esbruntimebasicsec |
Evaluating AMQ Streams on OpenShift | Evaluating AMQ Streams on OpenShift Red Hat AMQ 2021.q2 For use with AMQ Streams 1.7 on OpenShift Container Platform | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/evaluating_amq_streams_on_openshift/index |
Chapter 6. Securing access to Kafka | Chapter 6. Securing access to Kafka Secure your Kafka cluster by managing the access a client has to Kafka brokers. Specify configuration options to secure Kafka brokers and clients A secure connection between Kafka brokers and clients can encompass the following: Encryption for data exchange Authentication to prove identity Authorization to allow or decline actions executed by users The authentication and authorization mechanisms specified for a client must match those specified for the Kafka brokers. 6.1. Listener configuration Encryption and authentication in Kafka brokers is configured per listener. For more information about Kafka listener configuration, see Section 5.3.1, "Listeners" . Each listener in the Kafka broker is configured with its own security protocol. The configuration property listener.security.protocol.map defines which listener uses which security protocol. It maps each listener name to its security protocol. Supported security protocols are: PLAINTEXT Listener without any encryption or authentication. SSL Listener using TLS encryption and, optionally, authentication using TLS client certificates. SASL_PLAINTEXT Listener without encryption but with SASL-based authentication. SASL_SSL Listener with TLS-based encryption and SASL-based authentication. Given the following listeners configuration: the listener.security.protocol.map might look like this: This would configure the listener INT1 to use unencrypted connections with SASL authentication, the listener INT2 to use encrypted connections with SASL authentication and the REPLICATION interface to use TLS encryption (possibly with TLS client authentication). The same security protocol can be used multiple times. The following example is also a valid configuration: Such a configuration would use TLS encryption and TLS authentication (optional) for all interfaces. 6.2. TLS Encryption Kafka supports TLS for encrypting communication with Kafka clients. In order to use TLS encryption and server authentication, a keystore containing private and public keys has to be provided. This is usually done using a file in the Java Keystore (JKS) format. A path to this file is set in the ssl.keystore.location property. The ssl.keystore.password property should be used to set the password protecting the keystore. For example: In some cases, an additional password is used to protect the private key. Any such password can be set using the ssl.key.password property. Kafka is able to use keys signed by certification authorities as well as self-signed keys. Using keys signed by certification authorities should always be the preferred method. In order to allow clients to verify the identity of the Kafka broker they are connecting to, the certificate should always contain the advertised hostname(s) as its Common Name (CN) or in the Subject Alternative Names (SAN). It is possible to use different SSL configurations for different listeners. All options starting with ssl. can be prefixed with listener.name.<NameOfTheListener>. , where the name of the listener has to be always in lowercase. This will override the default SSL configuration for that specific listener. The following example shows how to use different SSL configurations for different listeners: Additional TLS configuration options In addition to the main TLS configuration options described above, Kafka supports many options for fine-tuning the TLS configuration. 
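To make the per-listener override described above concrete, a broker configuration could look like the following sketch (the listener names int1 and int2, the keystore paths, and the passwords are assumed for illustration):
listener.name.int1.ssl.keystore.location=/path/to/keystore/int1.jks
listener.name.int1.ssl.keystore.password=<int1_keystore_password>
listener.name.int2.ssl.keystore.location=/path/to/keystore/int2.jks
listener.name.int2.ssl.keystore.password=<int2_keystore_password>
Each prefixed option overrides the corresponding default ssl. option for that listener only; listeners without a prefixed option keep the default keystore settings.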
For example, to enable or disable TLS / SSL protocols or cipher suites: ssl.cipher.suites List of enabled cipher suites. Each cipher suite is a combination of authentication, encryption, MAC and key exchange algorithms used for the TLS connection. By default, all available cipher suites are enabled. ssl.enabled.protocols List of enabled TLS / SSL protocols. Defaults to TLSv1.2,TLSv1.1,TLSv1 . 6.2.1. Enabling TLS encryption This procedure describes how to enable encryption in Kafka brokers. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Procedure Generate TLS certificates for all Kafka brokers in your cluster. The certificates should have their advertised and bootstrap addresses in their Common Name or Subject Alternative Name. Edit the Kafka configuration properties file on all cluster nodes for the following: Change the listener.security.protocol.map field to specify the SSL protocol for the listener where you want to use TLS encryption. Set the ssl.keystore.location option to the path to the JKS keystore with the broker certificate. Set the ssl.keystore.password option to the password you used to protect the keystore. For example: (Re)start the Kafka brokers 6.3. Authentication To authenticate client connections to your Kafka cluster, the following options are available: TLS client authentication TLS (Transport Layer Security) using X.509 certificates on encrypted connections Kafka SASL Kafka SASL (Simple Authentication and Security Layer) using supported authentication mechanisms OAuth 2.0 OAuth 2.0 token-based authentication SASL authentication supports various mechanisms for both plain unencrypted connections and TLS connections: PLAIN ― Authentication based on usernames and passwords. SCRAM-SHA-256 and SCRAM-SHA-512 ― Authentication using Salted Challenge Response Authentication Mechanism (SCRAM). GSSAPI ― Authentication against a Kerberos server. Warning The PLAIN mechanism sends usernames and passwords over the network in an unencrypted format. It should only be used in combination with TLS encryption. 6.3.1. Enabling TLS client authentication Enable TLS client authentication in Kafka brokers to enhance security for connections to Kafka nodes already using TLS encryption. Use the ssl.client.auth property to set TLS authentication with one of these values: none ― TLS client authentication is off (default) requested ― Optional TLS client authentication required ― Clients must authenticate using a TLS client certificate When a client authenticates using TLS client authentication, the authenticated principal name is derived from the distinguished name in the client certificate. For instance, a user with a certificate having a distinguished name CN=someuser will be authenticated with the principal CN=someuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown . This principal name provides a unique identifier for the authenticated user or entity. When TLS client authentication is not used, and SASL is disabled, the principal name defaults to ANONYMOUS . Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. TLS encryption is enabled . Procedure Prepare a JKS (Java Keystore ) truststore containing the public key of the CA (Certification Authority) used to sign the user certificates. Edit the Kafka configuration properties file on all cluster nodes as follows: Specify the path to the JKS truststore using the ssl.truststore.location property. 
If the truststore is password-protected, set the password using ssl.truststore.password property. Set the ssl.client.auth property to required . TLS client authentication configuration (Re)start the Kafka brokers. 6.3.2. Enabling SASL PLAIN client authentication Enable SASL PLAIN authentication in Kafka to enhance security for connections to Kafka nodes. SASL authentication is enabled through the Java Authentication and Authorization Service (JAAS) using the KafkaServer JAAS context. You can define the JAAS configuration in a dedicated file or directly in the Kafka configuration. The recommended location for the dedicated file is /opt/kafka/config/jaas.conf . Ensure that the file is readable by the kafka user. Keep the JAAS configuration file in sync on all Kafka nodes. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Procedure Edit or create the /opt/kafka/config/jaas.conf JAAS configuration file to enable the PlainLoginModule and specify the allowed usernames and passwords. Make sure this file is the same on all Kafka brokers. JAAS configuration Edit the Kafka configuration properties file on all cluster nodes as follows: Enable SASL PLAIN authentication on specific listeners using the listener.security.protocol.map property. Specify SASL_PLAINTEXT or SASL_SSL . Set the sasl.enabled.mechanisms property to PLAIN . SASL plain configuration (Re)start the Kafka brokers using the KAFKA_OPTS environment variable to pass the JAAS configuration to Kafka brokers: 6.3.3. Enabling SASL SCRAM client authentication Enable SASL SCRAM authentication in Kafka to enhance security for connections to Kafka nodes. SASL authentication is enabled through the Java Authentication and Authorization Service (JAAS) using the KafkaServer JAAS context. You can define the JAAS configuration in a dedicated file or directly in the Kafka configuration. The recommended location for the dedicated file is /opt/kafka/config/jaas.conf . Ensure that the file is readable by the kafka user. Keep the JAAS configuration file in sync on all Kafka nodes. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Procedure Edit or create the /opt/kafka/config/jaas.conf JAAS configuration file to enable the ScramLoginModule . Make sure this file is the same on all Kafka brokers. JAAS configuration Edit the Kafka configuration properties file on all cluster nodes as follows: Enable SASL SCRAM authentication on specific listeners using the listener.security.protocol.map property. Specify SASL_PLAINTEXT or SASL_SSL . Set the sasl.enabled.mechanisms option to SCRAM-SHA-256 or SCRAM-SHA-512 . For example: (Re)start the Kafka brokers using the KAFKA_OPTS environment variable to pass the JAAS configuration to Kafka brokers. 6.3.4. Enabling multiple SASL mechanisms When using SASL authentication, you can enable more than one mechanism. Kafka can use more than one SASL mechanism simultaneously. When multiple mechanisms are enabled, you can choose the mechanism specific clients use. To use more than one mechanism, you set up the configuration required for each mechanism. You can add different KafkaServer JAAS configurations to the same context and enable more than one mechanism in the Kafka configuration as a comma-separated list using the sasl.mechanism.inter.broker.protocol property. JAAS configuration for more than one SASL mechanism SASL mechanisms enabled 6.3.5. 
Enabling SASL for inter-broker authentication Enable SASL SCRAM authentication between Kafka nodes to enhance security for inter-broker connections. As well as using SASL authentication for client connections to a Kafka cluster, you can also use SASL for inter-broker authentication. Unlike SASL for client connections, you can only choose one mechanism for inter-broker communication. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. If you are using a SCRAM mechanism, register SCRAM credentials on the Kafka cluster. For all nodes in the Kafka cluster, use the kafka-storage.sh tool to add the inter-broker SASL SCRAM user to the __cluster_metadata topic. This ensures that the credentials for authentication are updated for bootstrapping before the Kafka cluster is running. Registering an inter-broker SASL SCRAM user bin/kafka-storage.sh format \ --config /opt/kafka/config/kraft/server.properties \ --cluster-id 1 \ --release-version 3.7 \ --add-scram 'SCRAM-SHA-512=[name=kafka, password=changeit]' \ --ignore formatted Procedure Specify an inter-broker SASL mechanism in the Kafka configuration using the sasl.mechanism.inter.broker.protocol property. Inter-broker SASL mechanism Specify the username and password for inter-broker communication in the KafkaServer JAAS context using the username and password fields. Inter-broker JAAS context 6.3.6. Adding SASL SCRAM users This procedure outlines the steps to register new users for authentication using SASL SCRAM in Kafka. SASL SCRAM authentication enhances the security of client connections. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. SASL SCRAM authentication is enabled . Procedure Use the kafka-configs.sh tool to add new SASL SCRAM users. /opt/kafka/kafka-configs.sh \ --bootstrap-server <broker_host>:<port> \ --alter \ --add-config 'SCRAM-SHA-512=[password=<password>]' \ --entity-type users --entity-name <username> For example: /opt/kafka/kafka-configs.sh \ --bootstrap-server localhost:9092 \ --alter \ --add-config 'SCRAM-SHA-512=[password=123456]' \ --entity-type users \ --entity-name user1 6.3.7. Deleting SASL SCRAM users This procedure outlines the steps to remove users registered for authentication using SASL SCRAM in Kafka. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. SASL SCRAM authentication is enabled . Procedure Use the kafka-configs.sh tool to delete SASL SCRAM users. /opt/kafka/bin/kafka-configs.sh \ --bootstrap-server <broker_host>:<port> \ --alter \ --delete-config 'SCRAM-SHA-512' \ --entity-type users \ --entity-name <username> For example: /opt/kafka/bin/kafka-configs.sh \ --bootstrap-server localhost:9092 \ --alter \ --delete-config 'SCRAM-SHA-512' \ --entity-type users \ --entity-name user1 6.3.8. Enabling Kerberos (GSSAPI) authentication Streams for Apache Kafka supports the use of the Kerberos (GSSAPI) authentication protocol for secure single sign-on access to your Kafka cluster. GSSAPI is an API wrapper for Kerberos functionality, insulating applications from underlying implementation changes. Kerberos is a network authentication system that allows clients and servers to authenticate to each other by using symmetric encryption and a trusted third party, the Kerberos Key Distribution Centre (KDC). This procedure shows how to configure Streams for Apache Kafka so that Kafka clients can access Kafka using Kerberos (GSSAPI) authentication. 
The procedure assumes that a Kerberos krb5 resource server has been set up on a Red Hat Enterprise Linux host. The procedure shows, with examples, how to configure: Service principals Kafka brokers to use the Kerberos login Producer and consumer clients to access Kafka using Kerberos authentication The instructions describe Kerberos setup for a Kafka installation on a single host, with additional configuration for a producer and consumer client. Prerequisites To be able to configure Kafka to authenticate and authorize Kerberos credentials, you need the following: Access to a Kerberos server A Kerberos client on each Kafka broker host Add service principals for authentication From your Kerberos server, create service principals (users) for Kafka brokers, and Kafka producer and consumer clients. Service principals must take the form SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-REALM . Create the service principals, and keytabs that store the principal keys, through the Kerberos KDC. Make sure the domain name in the Kerberos principal is in uppercase. For example: kafka/node1.example.redhat.com@EXAMPLE.REDHAT.COM producer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM consumer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM Create a directory on the host and add the keytab files: For example: /opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab Ensure the kafka user can access the directory: chown kafka:kafka -R /opt/kafka/krb5 Configure the Kafka broker server to use a Kerberos login Configure Kafka to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for kafka . Modify the /opt/kafka/config/jaas.conf file with the following elements: KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/node1.example.redhat.com@EXAMPLE.REDHAT.COM"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/node1.example.redhat.com@EXAMPLE.REDHAT.COM"; }; Configure each broker in the Kafka cluster by modifying the listener configuration in the config/server.properties file so the listeners use the SASL/GSSAPI login. Add the SASL protocol to the map of security protocols for the listener, and remove any unwanted protocols. For example: # ... broker.id=0 # ... listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION # ... listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 # ... sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5 # ... 1 Two listeners are configured: a secure listener for general-purpose communications with clients (supporting TLS for communications), and a replication listener for inter-broker communications. 2 For TLS-enabled listeners, the protocol name is SASL_SSL. For non-TLS-enabled listeners, the protocol name is SASL_PLAINTEXT. If SSL is not required, you can remove the ssl.* properties. 3 SASL mechanism for Kerberos authentication is GSSAPI . 4 Kerberos authentication for inter-broker communication. 5 The name of the service used for authentication requests is specified to distinguish it from other services that may also be using the same Kerberos configuration.
Start the Kafka broker, with JVM parameters to specify the Kerberos login configuration: su - kafka export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties Configure Kafka producer and consumer clients to use Kerberos authentication Configure Kafka producer and consumer clients to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for producer1 and consumer1 . Add the Kerberos configuration to the producer or consumer configuration file. For example: /opt/kafka/config/producer.properties # ... sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ 4 useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/kafka-producer1.keytab" \ principal="producer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM"; # ... 1 Configuration for Kerberos (GSSAPI) authentication. 2 The security protocol is SASL_PLAINTEXT, that is, SASL authentication over an unencrypted connection. 3 The service principal (user) for Kafka that was configured in the Kerberos KDC. 4 Configuration for the JAAS using the same properties defined in jaas.conf . /opt/kafka/config/consumer.properties # ... sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/kafka-consumer1.keytab" \ principal="consumer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM"; # ... Run the clients to verify that you can send and receive messages from the Kafka brokers. Producer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Consumer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Additional resources Kerberos man pages: krb5.conf , kinit , klist , and kdestroy 6.4. Authorization Authorization in Kafka brokers is implemented using authorizer plugins. In this section we describe how to use the StandardAuthorizer plugin provided with Kafka. Alternatively, you can use your own authorization plugins. For example, if you are using OAuth 2.0 token-based authentication , you can use OAuth 2.0 authorization . 6.4.1. Enabling an ACL authorizer Edit the Kafka configuration properties file to add an ACL authorizer. Enable the authorizer by specifying its fully-qualified name in the authorizer.class.name property: Enabling the authorizer authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer 6.4.1.1. ACL rules An ACL authorizer uses ACL rules to manage access to Kafka brokers. ACL rules are defined in the following format: Principal P is allowed / denied <operation> O on <kafka_resource> R from host H For example, a rule might be set so that user John can view the topic comments from host 127.0.0.1 . Host is the IP address of the machine that John is connecting from.
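Expressed with the kafka-acls.sh tool described later in this chapter, and reading "view" as the Read operation, such a rule might be added like this (the broker address is a placeholder):

/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 \
  --add --allow-principal User:John --allow-host 127.0.0.1 \
  --operation Read --topic comments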
In most cases, the user is a producer or consumer application: Consumer01 can write to the consumer group accounts from host 127.0.0.1 If ACL rules are not present for a given resource, all actions are denied. This behavior can be changed by setting the property allow.everyone.if.no.acl.found to true in the Kafka configuration file. 6.4.1.2. Principals A principal represents the identity of a user. The format of the ID depends on the authentication mechanism used by clients to connect to Kafka: User:ANONYMOUS when connected without authentication. User:<username> when connected using simple authentication mechanisms, such as PLAIN or SCRAM. For example User:admin or User:user1 . User:<DistinguishedName> when connected using TLS client authentication. For example User:CN=user1,O=MyCompany,L=Prague,C=CZ . User:<Kerberos username> when connected using Kerberos. The DistinguishedName is the distinguished name from the client certificate. The Kerberos username is the primary part of the Kerberos principal, which is used by default when connecting using Kerberos. You can use the sasl.kerberos.principal.to.local.rules property to configure how the Kafka principal is built from the Kerberos principal. 6.4.1.3. Authentication of users To use authorization, you need to have authentication enabled and used by your clients. Otherwise, all connections will have the principal User:ANONYMOUS . For more information on methods of authentication, see Section 6.3, "Authentication" . 6.4.1.4. Super users Super users are allowed to take all actions regardless of the ACL rules. Super users are defined in the Kafka configuration file using the property super.users . For example: 6.4.1.5. Replica broker authentication When authorization is enabled, it is applied to all listeners and all connections. This includes the inter-broker connections used for replication of data between brokers. If enabling authorization, therefore, ensure that you use authentication for inter-broker connections and give the users used by the brokers sufficient rights. For example, if authentication between brokers uses the kafka-broker user, then super user configuration must include the username super.users=User:kafka-broker . Note For more information on the operations on Kafka resources you can control with ACLs, see the Apache Kafka documentation . 6.4.2. Adding ACL rules When using an ACL authorizer to control access to Kafka based on Access Control Lists (ACLs), you can add new ACL rules using the kafka-acls.sh utility. Use kafka-acls.sh parameter options to add, list and remove ACL rules, and perform other functions. The parameters require a double-hyphen convention, such as --add . Prerequisites Users have been created and granted appropriate permissions to access Kafka resources. Streams for Apache Kafka is installed on each host , and the configuration files are available. Authorization is enabled in Kafka brokers. Procedure Run kafka-acls.sh with the --add option. Examples: Allow user1 and user2 access to read from myTopic using the MyConsumerGroup consumer group. 
opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2 Deny user1 access to read myTopic from IP address host 127.0.0.1 . opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1 Add user1 as the consumer of myTopic with MyConsumerGroup . opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1 6.4.3. Listing ACL rules When using an ACL authorizer to control access to Kafka based on Access Control Lists (ACLs), you can list existing ACL rules using the kafka-acls.sh utility. Prerequisites ACLs have been added . Procedure Run kafka-acls.sh with the --list option. For example: opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic myTopic Current ACLs for resource `Topic:myTopic`: User:user1 has Allow permission for operations: Read from hosts: * User:user2 has Allow permission for operations: Read from hosts: * User:user2 has Deny permission for operations: Read from hosts: 127.0.0.1 User:user1 has Allow permission for operations: Describe from hosts: * User:user2 has Allow permission for operations: Describe from hosts: * User:user2 has Deny permission for operations: Describe from hosts: 127.0.0.1 6.4.4. Removing ACL rules When using an ACL authorizer to control access to Kafka based on Access Control Lists (ACLs), you can remove existing ACL rules using the kafka-acls.sh utility. Prerequisites ACLs have been added . Procedure Run kafka-acls.sh with the --remove option. Examples: Remove the ACL allowing Allow user1 and user2 access to read from myTopic using the MyConsumerGroup consumer group. opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2 Remove the ACL adding user1 as the consumer of myTopic with MyConsumerGroup . opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1 Remove the ACL denying user1 access to read myTopic from IP address host 127.0.0.1 . opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1 6.5. Using OAuth 2.0 token-based authentication Streams for Apache Kafka supports the use of OAuth 2.0 authentication using the OAUTHBEARER and PLAIN mechanisms. 
OAuth 2.0 enables standardized token-based authentication and authorization between applications, using a central authorization server to issue tokens that grant limited access to resources. You can configure OAuth 2.0 authentication, then OAuth 2.0 authorization . Kafka brokers and clients both need to be configured to use OAuth 2.0. OAuth 2.0 authentication can also be used in conjunction with simple or OPA-based Kafka authorization. Using OAuth 2.0 authentication, application clients can access resources on application servers (called resource servers ) without exposing account credentials. The application client passes an access token as a means of authenticating, which application servers can also use to determine the level of access to grant. The authorization server handles the granting of access and inquiries about access. In the context of Streams for Apache Kafka: Kafka brokers act as OAuth 2.0 resource servers Kafka clients act as OAuth 2.0 application clients Kafka clients authenticate to Kafka brokers. The brokers and clients communicate with the OAuth 2.0 authorization server, as necessary, to obtain or validate access tokens. For a deployment of Streams for Apache Kafka, OAuth 2.0 integration provides: Server-side OAuth 2.0 support for Kafka brokers Client-side OAuth 2.0 support for Kafka MirrorMaker, Kafka Connect, and the Kafka Bridge Streams for Apache Kafka on RHEL includes two OAuth 2.0 libraries: kafka-oauth-client Provides a custom login callback handler class named io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler . To handle the OAUTHBEARER authentication mechanism, use the login callback handler with the OAuthBearerLoginModule provided by Apache Kafka. kafka-oauth-common A helper library that provides some of the functionality needed by the kafka-oauth-client library. The provided client libraries also have dependencies on some additional third-party libraries, such as: keycloak-core , jackson-databind , and slf4j-api . We recommend using a Maven project to package your client to ensure that all the dependency libraries are included. Dependency libraries might change in future versions. Additional resources OAuth 2.0 site 6.5.1. OAuth 2.0 authentication mechanisms Streams for Apache Kafka supports the OAUTHBEARER and PLAIN mechanisms for OAuth 2.0 authentication. Both mechanisms allow Kafka clients to establish authenticated sessions with Kafka brokers. The authentication flow between clients, the authorization server, and Kafka brokers is different for each mechanism. We recommend that you configure clients to use OAUTHBEARER whenever possible. OAUTHBEARER provides a higher level of security than PLAIN because client credentials are never shared with Kafka brokers. Consider using PLAIN only with Kafka clients that do not support OAUTHBEARER. You configure Kafka broker listeners to use OAuth 2.0 authentication for connecting clients. If necessary, you can use the OAUTHBEARER and PLAIN mechanisms on the same oauth listener. The properties to support each mechanism must be explicitly specified in the oauth listener configuration. OAUTHBEARER overview To use OAUTHBEARER, set sasl.enabled.mechanisms to OAUTHBEARER in the OAuth authentication listener configuration for the Kafka broker. For detailed configuration, see Section 6.5.2, "OAuth 2.0 Kafka broker configuration" . listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER Many Kafka client tools use libraries that provide basic support for OAUTHBEARER at the protocol level. 
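As an illustration, a Java client that uses the login callback handler from the kafka-oauth-client library might be configured with properties along the following lines; the security protocol, URLs, and credentials are placeholders that depend on your listener and authorization server setup:

security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.client.id="<client_id>" \
  oauth.client.secret="<client_secret>" \
  oauth.token.endpoint.uri="https://<oauth_server_address>/token" ;
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler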
To support application development, Streams for Apache Kafka provides an OAuth callback handler for the upstream Kafka Client Java libraries (but not for other libraries). Therefore, you do not need to write your own callback handlers. An application client can use the callback handler to provide the access token. Clients written in other languages, such as Go, must use custom code to connect to the authorization server and obtain the access token. With OAUTHBEARER, the client initiates a session with the Kafka broker for credentials exchange, where credentials take the form of a bearer token provided by the callback handler. Using the callbacks, you can configure token provision in one of three ways: Client ID and secret (by using the OAuth 2.0 client credentials mechanism) A long-lived access token, obtained manually at configuration time A long-lived refresh token, obtained manually at configuration time Note OAUTHBEARER authentication can only be used by Kafka clients that support the OAUTHBEARER mechanism at the protocol level. PLAIN overview To use PLAIN, add PLAIN to the value of sasl.enabled.mechanisms . listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER,PLAIN PLAIN is a simple authentication mechanism used by all Kafka client tools. To enable PLAIN to be used with OAuth 2.0 authentication, Streams for Apache Kafka provides OAuth 2.0 over PLAIN server-side callbacks. Client credentials are handled centrally behind a compliant authorization server, similar to when OAUTHBEARER authentication is used. When used with the OAuth 2.0 over PLAIN callbacks, Kafka clients authenticate with Kafka brokers using either of the following methods: Client ID and secret (by using the OAuth 2.0 client credentials mechanism) A long-lived access token, obtained manually at configuration time For both methods, the client must provide the PLAIN username and password properties to pass credentials to the Kafka broker. The client uses these properties to pass a client ID and secret or username and access token. Client IDs and secrets are used to obtain access tokens. Access tokens are passed as password property values. You pass the access token with or without an $accessToken: prefix. If you configure a token endpoint ( oauth.token.endpoint.uri ) in the listener configuration, you need the prefix. If you do not configure a token endpoint ( oauth.token.endpoint.uri ) in the listener configuration, you do not need the prefix, and the Kafka broker interprets the password as a raw access token. If the password is set as the access token, the username must be set to the same principal name that the Kafka broker obtains from the access token. You can specify username extraction options in your listener using the oauth.username.claim , oauth.fallback.username.claim , oauth.fallback.username.prefix , and oauth.userinfo.endpoint.uri properties. The username extraction process also depends on your authorization server; in particular, how it maps client IDs to account names. Note OAuth over PLAIN does not support passing a username and password (password grants) using the (deprecated) OAuth 2.0 password grant mechanism. 6.5.1.1. Configuring OAuth 2.0 with properties or variables You can configure OAuth 2.0 settings using Java Authentication and Authorization Service (JAAS) properties or environment variables. JAAS properties are configured in the server.properties configuration file, and passed as key-value pairs of the listener.name. LISTENER-NAME .oauthbearer.sasl.jaas.config property.
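For instance, assuming a listener named CLIENT and the library's upper-case naming convention for variables, the broker client secret could be supplied either as a JAAS property or as an environment variable:

# As a JAAS property in server.properties
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.client.id="kafka-broker" \
  oauth.client.secret="<client_secret>" ;

# As an environment variable (assumed naming)
export OAUTH_CLIENT_SECRET=<client_secret>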
If using environment variables, you still need to provide the listener.name. LISTENER-NAME .oauthbearer.sasl.jaas.config property in the server.properties file, but you can omit the other JAAS properties. You can use capitalized or upper-case environment variable naming conventions. The Streams for Apache Kafka OAuth 2.0 libraries use properties that start with: oauth. to configure authentication strimzi. to configure OAuth 2.0 authorization Additional resources OAuth 2.0 Kafka broker configuration 6.5.2. OAuth 2.0 Kafka broker configuration Kafka broker configuration for OAuth 2.0 authentication involves: Creating the OAuth 2.0 client in the authorization server Configuring OAuth 2.0 authentication in the Kafka cluster Note In relation to the authorization server, Kafka brokers and Kafka clients are both regarded as OAuth 2.0 clients. 6.5.2.1. OAuth 2.0 client configuration on an authorization server To configure a Kafka broker to validate the token received during session initiation, the recommended approach is to create an OAuth 2.0 client definition in an authorization server, configured as confidential , with the following client credentials enabled: Client ID of kafka-broker (for example) Client ID and secret as the authentication mechanism Note You only need to use a client ID and secret when using a non-public introspection endpoint of the authorization server. The credentials are not typically required when using public authorization server endpoints, as with fast local JWT token validation. 6.5.2.2. OAuth 2.0 authentication configuration in the Kafka cluster To use OAuth 2.0 authentication in the Kafka cluster, you enable an OAuth authentication listener configuration for your Kafka cluster, in the Kafka server.properties file. A minimum configuration is required. You can also configure a TLS listener, where TLS is used for inter-broker communication. You can configure the broker for token validation by the authorization server using one of the following methods: Fast local token validation: a JWKS endpoint in combination with signed JWT-formatted access tokens Introspection endpoint You can configure OAUTHBEARER or PLAIN authentication, or both. The following example shows a minimum configuration that applies a global listener configuration, which means that inter-broker communication goes through the same listener as application clients. The example also shows an OAuth 2.0 configuration for a specific listener, where you specify listener.name. LISTENER-NAME .sasl.enabled.mechanisms instead of sasl.enabled.mechanisms . LISTENER-NAME is the case-insensitive name of the listener. Here, we name the listener CLIENT , so the property name is listener.name.client.sasl.enabled.mechanisms . The example uses OAUTHBEARER authentication. 
Example: Minimum listener configuration for OAuth 2.0 authentication using a JWKS endpoint sasl.enabled.mechanisms=OAUTHBEARER 1 listeners=CLIENT://0.0.0.0:9092 2 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 3 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER 4 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 5 inter.broker.listener.name=CLIENT 6 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 7 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ 8 oauth.valid.issuer.uri="https://<oauth_server_address>" \ 9 oauth.jwks.endpoint.uri="https://<oauth_server_address>/jwks" \ 10 oauth.username.claim="preferred_username" \ 11 oauth.client.id="kafka-broker" \ 12 oauth.client.secret="kafka-secret" \ 13 oauth.token.endpoint.uri="https://<oauth_server_address>/token" ; 14 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 15 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 16 1 Enables the OAUTHBEARER mechanism for credentials exchange over SASL. 2 Configures a listener for client applications to connect to. The system hostname is used as an advertised hostname, which clients must resolve in order to reconnect. The listener is named CLIENT in this example. 3 Specifies the channel protocol for the listener. SASL_SSL is for TLS. SASL_PLAINTEXT is used for an unencrypted connection (no TLS), but there is risk of eavesdropping and interception at the TCP connection layer. 4 Specifies the OAUTHBEARER mechanism for the CLIENT listener. The client name ( CLIENT ) is usually specified in uppercase in the listeners property, in lowercase for listener.name properties ( listener.name.client ), and in lowercase when part of a listener.name. client .* property. 5 Specifies the OAUTHBEARER mechanism for inter-broker communication. 6 Specifies the listener for inter-broker communication. The specification is required for the configuration to be valid. 7 Configures OAuth 2.0 authentication on the client listener. 8 Configures authentication settings for client and inter-broker communication. The oauth.client.id , oauth.client.secret , and auth.token.endpoint.uri properties relate to inter-broker configuration. 9 A valid issuer URI. Only access tokens issued by this issuer will be accepted. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME . 10 The JWKS endpoint URL. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs . 11 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. 12 Client ID of the Kafka broker, which is the same for all brokers. This is the client registered with the authorization server as kafka-broker . 13 Secret for the Kafka broker, which is the same for all brokers. 14 The OAuth 2.0 token endpoint URL to your authorization server. For production, always use https:// urls. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token . 15 Enables (and is only required for) OAuth 2.0 authentication for inter-broker communication. 
16 (Optional) Enforces session expiry when a token expires, and also activates the Kafka re-authentication mechanism . If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. The following example shows a minimum configuration for a TLS listener, where TLS is used for inter-broker communication. Example: TLS listener configuration for OAuth 2.0 authentication listeners=REPLICATION://kafka:9091,CLIENT://kafka:9092 1 listener.security.protocol.map=REPLICATION:SSL,CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER inter.broker.listener.name=REPLICATION listener.name.replication.ssl.keystore.password=<keystore_password> 3 listener.name.replication.ssl.truststore.password=<truststore_password> listener.name.replication.ssl.keystore.type=JKS listener.name.replication.ssl.truststore.type=JKS listener.name.replication.ssl.secure.random.implementation=SHA1PRNG 4 listener.name.replication.ssl.endpoint.identification.algorithm=HTTPS 5 listener.name.replication.ssl.keystore.location=<path_to_keystore> 6 listener.name.replication.ssl.truststore.location=<path_to_truststore> 7 listener.name.replication.ssl.client.auth=required 8 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ 9 oauth.valid.issuer.uri="https://<oauth_server_address>" \ oauth.jwks.endpoint.uri="https://<oauth_server_address>/jwks" \ oauth.username.claim="preferred_username" ; 1 Separate configurations are required for inter-broker communication and client applications. 2 Configures the REPLICATION listener to use TLS, and the CLIENT listener to use SASL over an unencrypted channel. The client could use an encrypted channel ( SASL_SSL ) in a production environment. 3 The ssl. properties define the TLS configuration. 4 Random number generator implementation. If not set, the Java platform SDK default is used. 5 Hostname verification. If set to an empty string, the hostname verification is turned off. If not set, the default value is HTTPS , which enforces hostname verification for server certificates. 6 Path to the keystore for the listener. 7 Path to the truststore for the listener. 8 Specifies that clients of the REPLICATION listener have to authenticate with a client certificate when establishing a TLS connection (used for inter-broker connectivity). 9 Configures the CLIENT listener for OAuth 2.0. Connectivity with the authorization server should use secure HTTPS connections. The following example shows a minimum configuration for OAuth 2.0 authentication using the PLAIN authentication mechanism for credentials exchange over SASL. Fast local token validation is used. 
Example: Minimum listener configuration for PLAIN authentication listeners=CLIENT://0.0.0.0:9092 1 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER,PLAIN 3 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 4 inter.broker.listener.name=CLIENT 5 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 6 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ 7 oauth.valid.issuer.uri="http://<auth_server>/auth/realms/<realm>" \ 8 oauth.jwks.endpoint.uri="https://<auth_server>/auth/realms/<realm>/protocol/openid-connect/certs" \ 9 oauth.username.claim="preferred_username" \ 10 oauth.client.id="kafka-broker" \ 11 oauth.client.secret="kafka-secret" \ 12 oauth.token.endpoint.uri="https://<oauth_server_address>/token" ; 13 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 14 listener.name.client.plain.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.plain.JaasServerOauthOverPlainValidatorCallbackHandler 15 listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \ 16 oauth.valid.issuer.uri="https://<oauth_server_address>" \ 17 oauth.jwks.endpoint.uri="https://<oauth_server_address>/jwks" \ 18 oauth.username.claim="preferred_username" \ 19 oauth.token.endpoint.uri="http://<auth_server>/auth/realms/<realm>/protocol/openid-connect/token" ; 20 connections.max.reauth.ms=3600000 21 1 Configures a listener (named CLIENT in this example) for client applications to connect to. The system hostname is used as an advertised hostname, which clients must resolve in order to reconnect. Because this is the only configured listener, it is also used for inter-broker communication. 2 Configures the example CLIENT listener to use SASL over an unencrypted channel. In a production environment, the client should use an encrypted channel ( SASL_SSL ) in order to guard against eavesdropping and interception at the TCP connection layer. 3 Enables the PLAIN authentication mechanism for credentials exchange over SASL as well as OAUTHBEARER . OAUTHBEARER is also specified because it is required for inter-broker communication. Kafka clients can choose which mechanism to use to connect. 4 Specifies the OAUTHBEARER authentication mechanism for inter-broker communication. 5 Specifies the listener (named CLIENT in this example) for inter-broker communication. Required for the configuration to be valid. 6 Configures the server callback handler for the OAUTHBEARER mechanism. 7 Configures authentication settings for client and inter-broker communication using the OAUTHBEARER mechanism. The oauth.client.id , oauth.client.secret , and oauth.token.endpoint.uri properties relate to inter-broker configuration. 8 A valid issuer URI. Only access tokens from this issuer are accepted. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME 9 The JWKS endpoint URL. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs 10 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. 
If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. 11 Client ID of the Kafka broker, which is the same for all brokers. This is the client registered with the authorization server as kafka-broker . 12 Secret for the Kafka broker (the same for all brokers). 13 The OAuth 2.0 token endpoint URL to your authorization server. For production, always use https:// urls. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token 14 Enables OAuth 2.0 authentication for inter-broker communication. 15 Configures the server callback handler for PLAIN authentication. 16 Configures authentication settings for client communication using PLAIN authentication. oauth.token.endpoint.uri is an optional property that enables OAuth 2.0 over PLAIN using the OAuth 2.0 client credentials mechanism . 17 A valid issuer URI. Only access tokens from this issuer are accepted. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME 18 The JWKS endpoint URL. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs 19 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. 20 The OAuth 2.0 token endpoint URL to your authorization server. Additional configuration for the PLAIN mechanism. If specified, clients can authenticate over PLAIN by passing an access token as the password using an $accessToken: prefix. For production, always use https:// urls. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token . 21 (Optional) Enforces session expiry when a token expires, and also activates the Kafka re-authentication mechanism . If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. 6.5.2.3. Fast local JWT token validation configuration Fast local JWT token validation checks a JWT token signature locally. The local check ensures that a token: Conforms to type by containing a ( typ ) claim value of Bearer for an access token Is valid (not expired) Has an issuer that matches a validIssuerURI You specify a valid issuer URI when you configure the listener, so that any tokens not issued by the authorization server are rejected. The authorization server does not need to be contacted during fast local JWT token validation. You activate fast local JWT token validation by specifying a JWKS endpoint URI exposed by the OAuth 2.0 authorization server. The endpoint contains the public keys used to validate signed JWT tokens, which are sent as credentials by Kafka clients. Note All communication with the authorization server should be performed using HTTPS. For a TLS listener, you can configure a certificate truststore and point to the truststore file.
Example properties for fast local JWT token validation listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.valid.issuer.uri="https://<oauth_server_address>" \ 1 oauth.jwks.endpoint.uri="https://<oauth_server_address>/jwks" \ 2 oauth.jwks.refresh.seconds="300" \ 3 oauth.jwks.refresh.min.pause.seconds="1" \ 4 oauth.jwks.expiry.seconds="360" \ 5 oauth.username.claim="preferred_username" \ 6 oauth.ssl.truststore.location="<path_to_truststore_p12_file>" \ 7 oauth.ssl.truststore.password="<truststore_password>" \ 8 oauth.ssl.truststore.type="PKCS12" ; 9 1 A valid issuer URI. Only access tokens issued by this issuer will be accepted. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME . 2 The JWKS endpoint URL. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs . 3 The period between endpoint refreshes (default 300). 4 The minimum pause in seconds between consecutive attempts to refresh JWKS public keys. When an unknown signing key is encountered, the JWKS keys refresh is scheduled outside the regular periodic schedule with at least the specified pause since the last refresh attempt. The refreshing of keys follows the rule of exponential backoff, retrying on unsuccessful refreshes with ever increasing pause, until it reaches oauth.jwks.refresh.seconds . The default value is 1. 5 The duration the JWKs certificates are considered valid before they expire. Default is 360 seconds. If you specify a longer time, consider the risk of allowing access to revoked certificates. 6 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. 7 The location of the truststore used in the TLS configuration. 8 Password to access the truststore. 9 The truststore type in PKCS #12 format. 6.5.2.4. OAuth 2.0 introspection endpoint configuration Token validation using an OAuth 2.0 introspection endpoint treats a received access token as opaque. The Kafka broker sends an access token to the introspection endpoint, which responds with the token information necessary for validation. Importantly, it returns up-to-date information if the specific access token is valid, and also information about when the token expires. To configure OAuth 2.0 introspection-based validation, you specify an introspection endpoint URI rather than the JWKs endpoint URI specified for fast local JWT token validation. Depending on the authorization server, you typically have to specify a client ID and client secret , because the introspection endpoint is usually protected. Example properties for an introspection endpoint listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.introspection.endpoint.uri="https://<oauth_server_address>/introspection" \ 1 oauth.client.id="kafka-broker" \ 2 oauth.client.secret="kafka-broker-secret" \ 3 oauth.ssl.truststore.location="<path_to_truststore_p12_file>" \ 4 oauth.ssl.truststore.password="<truststore_password>" \ 5 oauth.ssl.truststore.type="PKCS12" \ 6 oauth.username.claim="preferred_username" ; 7 1 The OAuth 2.0 introspection endpoint URI. 
For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token/introspect . 2 Client ID of the Kafka broker. 3 Secret for the Kafka broker. 4 The location of the truststore used in the TLS configuration. 5 Password to access the truststore. 6 The truststore type in PKCS #12 format. 7 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. 6.5.3. Session re-authentication for Kafka brokers You can configure OAuth listeners to use Kafka session re-authentication for OAuth 2.0 sessions between Kafka clients and Kafka brokers. This mechanism enforces the expiry of an authenticated session between the client and the broker after a defined period of time. When a session expires, the client immediately starts a new session by reusing the existing connection rather than dropping it. Session re-authentication is disabled by default. You can enable it in the server.properties file. Set the connections.max.reauth.ms property for a TLS listener with OAUTHBEARER or PLAIN enabled as the SASL mechanism. You can specify session re-authentication per listener. For example: Session re-authentication must be supported by the Kafka client libraries used by the client. Session re-authentication can be used with fast local JWT or introspection endpoint token validation. Client re-authentication When the broker's authenticated session expires, the client must re-authenticate to the existing session by sending a new, valid access token to the broker, without dropping the connection. If token validation is successful, a new client session is started using the existing connection. If the client fails to re-authenticate, the broker will close the connection if further attempts are made to send or receive messages. Java clients that use Kafka client library 2.2 or later automatically re-authenticate if the re-authentication mechanism is enabled on the broker. Session re-authentication also applies to refresh tokens, if used. When the session expires, the client refreshes the access token by using its refresh token. The client then uses the new access token to re-authenticate over the existing connection. Session expiry for OAUTHBEARER and PLAIN When session re-authentication is configured, session expiry works differently for OAUTHBEARER and PLAIN authentication. For OAUTHBEARER and PLAIN, using the client ID and secret method: The broker's authenticated session will expire at the configured connections.max.reauth.ms . The session will expire earlier if the access token expires before the configured time. For PLAIN using the long-lived access token method: The broker's authenticated session will expire at the configured connections.max.reauth.ms . Re-authentication will fail if the access token expires before the configured time. Although session re-authentication is attempted, PLAIN has no mechanism for refreshing tokens. If connections.max.reauth.ms is not configured, OAUTHBEARER and PLAIN clients can remain connected to brokers indefinitely, without needing to re-authenticate. Authenticated sessions do not end with access token expiry. However, this can be considered when configuring authorization, for example, by using keycloak authorization or installing a custom authorizer. 
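For reference, the per-listener property described at the start of this section might be set as follows for a listener named CLIENT with the OAUTHBEARER mechanism; the one-hour value is only an example:

listener.name.client.oauthbearer.connections.max.reauth.ms=3600000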
Additional resources OAuth 2.0 Kafka broker configuration Configuring OAuth 2.0 support for Kafka brokers KIP-368: Allow SASL Connections to Periodically Re-Authenticate 6.5.4. OAuth 2.0 Kafka client configuration A Kafka client is configured with either: The credentials required to obtain a valid access token from an authorization server (client ID and Secret) A valid long-lived access token or refresh token, obtained using tools provided by an authorization server The only information ever sent to the Kafka broker is an access token. The credentials used to authenticate with the authorization server to obtain the access token are never sent to the broker. When a client obtains an access token, no further communication with the authorization server is needed. The simplest mechanism is authentication with a client ID and Secret. Using a long-lived access token, or a long-lived refresh token, adds more complexity because there is an additional dependency on authorization server tools. Note If you are using long-lived access tokens, you may need to configure the client in the authorization server to increase the maximum lifetime of the token. If the Kafka client is not configured with an access token directly, the client exchanges credentials for an access token during Kafka session initiation by contacting the authorization server. The Kafka client exchanges either: Client ID and Secret Client ID, refresh token, and (optionally) a secret Username and password, with client ID and (optionally) a secret 6.5.5. OAuth 2.0 client authentication flows OAuth 2.0 authentication flows depend on the underlying Kafka client and Kafka broker configuration. The flows must also be supported by the authorization server used. The Kafka broker listener configuration determines how clients authenticate using an access token. The client can pass a client ID and secret to request an access token. If a listener is configured to use PLAIN authentication, the client can authenticate with a client ID and secret or username and access token. These values are passed as the username and password properties of the PLAIN mechanism. Listener configuration supports the following token validation options: You can use fast local token validation based on JWT signature checking and local token introspection, without contacting an authorization server. The authorization server provides a JWKS endpoint with public certificates that are used to validate signatures on the tokens. You can use a call to a token introspection endpoint provided by an authorization server. Each time a new Kafka broker connection is established, the broker passes the access token received from the client to the authorization server. The Kafka broker checks the response to confirm whether or not the token is valid. Note An authorization server might only allow the use of opaque access tokens, which means that local token validation is not possible. Kafka client credentials can also be configured for the following types of authentication: Direct local access using a previously generated long-lived access token Contact with the authorization server for a new access token to be issued (using a client ID and a secret, or a refresh token, or a username and a password) 6.5.5.1. Example client authentication flows using the SASL OAUTHBEARER mechanism You can use the following communication flows for Kafka authentication using the SASL OAUTHBEARER mechanism. 
Client using client ID and secret, with broker delegating validation to authorization server Client using client ID and secret, with broker performing fast local token validation Client using long-lived access token, with broker delegating validation to authorization server Client using long-lived access token, with broker performing fast local validation Client using client ID and secret, with broker delegating validation to authorization server The Kafka client requests an access token from the authorization server using a client ID and secret, and optionally a refresh token. Alternatively, the client may authenticate using a username and a password. The authorization server generates a new access token. The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. The Kafka broker validates the access token by calling a token introspection endpoint on the authorization server using its own client ID and secret. A Kafka client session is established if the token is valid. Client using client ID and secret, with broker performing fast local token validation The Kafka client authenticates with the authorization server from the token endpoint, using a client ID and secret, and optionally a refresh token. Alternatively, the client may authenticate using a username and a password. The authorization server generates a new access token. The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. The Kafka broker validates the access token locally using a JWT token signature check, and local token introspection. Client using long-lived access token, with broker delegating validation to authorization server The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. The Kafka broker validates the access token by calling a token introspection endpoint on the authorization server, using its own client ID and secret. A Kafka client session is established if the token is valid. Client using long-lived access token, with broker performing fast local validation The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. The Kafka broker validates the access token locally using a JWT token signature check and local token introspection. Warning Fast local JWT token signature validation is suitable only for short-lived tokens as there is no check with the authorization server if a token has been revoked. Token expiration is written into the token, but revocation can happen at any time, so cannot be accounted for without contacting the authorization server. Any issued token would be considered valid until it expires. 6.5.5.2. Example client authentication flows using the SASL PLAIN mechanism You can use the following communication flows for Kafka authentication using the OAuth PLAIN mechanism. Client using a client ID and secret, with the broker obtaining the access token for the client Client using a long-lived access token without a client ID and secret Client using a client ID and secret, with the broker obtaining the access token for the client The Kafka client passes a clientId as a username and a secret as a password. The Kafka broker uses a token endpoint to pass the clientId and secret to the authorization server. The authorization server returns a fresh access token or an error if the client credentials are not valid. 
The Kafka broker validates the token in one of the following ways: If a token introspection endpoint is specified, the Kafka broker validates the access token by calling the endpoint on the authorization server. A session is established if the token validation is successful. If local token introspection is used, a request is not made to the authorization server. The Kafka broker validates the access token locally using a JWT token signature check. Client using a long-lived access token without a client ID and secret The Kafka client passes a username and password. The password provides the value of an access token that was obtained manually and configured before running the client. The password is passed with or without a $accessToken: string prefix, depending on whether or not the Kafka broker listener is configured with a token endpoint for authentication. If the token endpoint is configured, the password should be prefixed by $accessToken: to let the broker know that the password parameter contains an access token rather than a client secret. The Kafka broker interprets the username as the account username. If the token endpoint is not configured on the Kafka broker listener (enforcing a no-client-credentials mode ), the password should provide the access token without the prefix. The Kafka broker interprets the username as the account username. In this mode, the client does not use a client ID and secret, and the password parameter is always interpreted as a raw access token. The Kafka broker validates the token in one of the following ways: If a token introspection endpoint is specified, the Kafka broker validates the access token by calling the endpoint on the authorization server. A session is established if token validation is successful. If local token introspection is used, a request is not made to the authorization server. The Kafka broker validates the access token locally using a JWT token signature check. 6.5.6. Configuring OAuth 2.0 authentication OAuth 2.0 is used for interaction between Kafka clients and Streams for Apache Kafka components. In order to use OAuth 2.0 for Streams for Apache Kafka, you must: Configure an OAuth 2.0 authorization server for the Streams for Apache Kafka cluster and Kafka clients Deploy or update the Kafka cluster with Kafka broker listeners configured to use OAuth 2.0 Update your Java-based Kafka clients to use OAuth 2.0 6.5.6.1. Configuring Red Hat Single Sign-On as an OAuth 2.0 authorization server This procedure describes how to deploy Red Hat Single Sign-On as an authorization server and configure it for integration with Streams for Apache Kafka. The authorization server provides a central point for authentication and authorization, and management of users, clients, and permissions. Red Hat Single Sign-On has a concept of realms where a realm represents a separate set of users, clients, permissions, and other configuration. You can use a default master realm , or create a new one. Each realm exposes its own OAuth 2.0 endpoints, which means that application clients and application servers all need to use the same realm. To use OAuth 2.0 with Streams for Apache Kafka, you use a deployment of Red Hat Single Sign-On to create and manage authentication realms. Note If you already have Red Hat Single Sign-On deployed, you can skip the deployment step and use your current deployment. Before you begin You will need to be familiar with using Red Hat Single Sign-On.
For installation and administration instructions, see: Server Installation and Configuration Guide Server Administration Guide Prerequisites Streams for Apache Kafka and Kafka are running For the Red Hat Single Sign-On deployment: Check the Red Hat Single Sign-On Supported Configurations Procedure Install Red Hat Single Sign-On. You can install from a ZIP file or by using an RPM. Log in to the Red Hat Single Sign-On Admin Console to create the OAuth 2.0 policies for Streams for Apache Kafka. Login details are provided when you deploy Red Hat Single Sign-On. Create and enable a realm. You can use an existing master realm. Adjust the session and token timeouts for the realm, if required. Create a client called kafka-broker . From the Settings tab, set: Access Type to Confidential Standard Flow Enabled to OFF to disable web login for this client Service Accounts Enabled to ON to allow this client to authenticate in its own name Click Save before continuing. From the Credentials tab, make a note of the secret for use in your Streams for Apache Kafka cluster configuration. Repeat the client creation steps for any application client that will connect to your Kafka brokers. Create a definition for each new client. You will use the names as client IDs in your configuration. What to do next After deploying and configuring the authorization server, configure the Kafka brokers to use OAuth 2.0 . 6.5.6.2. Configuring OAuth 2.0 support for Kafka brokers This procedure describes how to configure Kafka brokers so that the broker listeners are enabled to use OAuth 2.0 authentication using an authorization server. Use OAuth 2.0 over an encrypted interface by configuring TLS listeners. Plain listeners are not recommended. Configure the Kafka brokers using properties that support your chosen authorization server, and the type of authorization you are implementing. Before you start For more information on the configuration and authentication of Kafka broker listeners, see: Listeners OAuth 2.0 authentication mechanisms For a description of the properties used in the listener configuration, see: OAuth 2.0 Kafka broker configuration Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. An OAuth 2.0 authorization server is deployed. Procedure Configure the Kafka broker listener configuration in the server.properties file. For example, using the OAUTHBEARER mechanism:
sasl.enabled.mechanisms=OAUTHBEARER
listeners=CLIENT://0.0.0.0:9092
listener.security.protocol.map=CLIENT:SASL_PLAINTEXT
listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER
sasl.mechanism.inter.broker.protocol=OAUTHBEARER
inter.broker.listener.name=CLIENT
listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ;
listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
Configure broker connection settings as part of the listener.name.client.oauthbearer.sasl.jaas.config . The examples here show connection configuration options.
Example 1: Local token validation using a JWKS endpoint configuration
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.valid.issuer.uri="https://<oauth_server_address>/auth/realms/<realm_name>" \
  oauth.jwks.endpoint.uri="https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/certs" \
  oauth.jwks.refresh.seconds="300" \
  oauth.jwks.refresh.min.pause.seconds="1" \
  oauth.jwks.expiry.seconds="360" \
  oauth.username.claim="preferred_username" \
  oauth.ssl.truststore.location="<path_to_truststore_p12_file>" \
  oauth.ssl.truststore.password="<truststore_password>" \
  oauth.ssl.truststore.type="PKCS12" ;
listener.name.client.oauthbearer.connections.max.reauth.ms=3600000
Example 2: Delegating token validation to the authorization server through the OAuth 2.0 introspection endpoint
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.introspection.endpoint.uri="https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/introspection" \
  # ...
If required, configure access to the authorization server. This step is normally required for a production environment, unless a technology like service mesh is used to configure secure channels outside containers. Provide a custom truststore for connecting to a secured authorization server. SSL is always required for access to the authorization server. Set properties to configure the truststore. For example:
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  # ...
  oauth.client.id="kafka-broker" \
  oauth.client.secret="kafka-broker-secret" \
  oauth.ssl.truststore.location="<path_to_truststore_p12_file>" \
  oauth.ssl.truststore.password="<truststore_password>" \
  oauth.ssl.truststore.type="PKCS12" ;
If the certificate hostname does not match the access URL hostname, you can turn off certificate hostname validation: oauth.ssl.endpoint.identification.algorithm="" The check ensures that the client connection to the authorization server is authentic. You may wish to turn off the validation in a non-production environment. Configure additional properties according to your chosen authentication flow:
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  # ...
  oauth.token.endpoint.uri="https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/token" \ 1
  oauth.custom.claim.check="@.custom == 'custom-value'" \ 2
  oauth.scope="<scope>" \ 3
  oauth.check.audience="true" \ 4
  oauth.audience="<audience>" \ 5
  oauth.valid.issuer.uri="https://<oauth_server_address>/auth/realms/<realm_name>" \ 6
  oauth.client.id="kafka-broker" \ 7
  oauth.client.secret="kafka-broker-secret" \ 8
  oauth.connect.timeout.seconds=60 \ 9
  oauth.read.timeout.seconds=60 \ 10
  oauth.http.retries=2 \ 11
  oauth.http.retry.pause.millis=300 \ 12
  oauth.groups.claim="$.groups" \ 13
  oauth.groups.claim.delimiter="," \ 14
  oauth.include.accept.header="false" ; 15
1 The OAuth 2.0 token endpoint URL to your authorization server. For production, always use https:// URLs. Required when KeycloakAuthorizer is used, or an OAuth 2.0 enabled listener is used for inter-broker communication. 2 (Optional) Custom claim checking . A JsonPath filter query that applies additional custom rules to the JWT access token during validation.
If the access token does not contain the necessary data, it is rejected. When using the introspection endpoint method, the custom check is applied to the introspection endpoint response JSON. 3 (Optional) A scope parameter passed to the token endpoint. A scope is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener. 4 (Optional) Audience checking . If your authorization server provides an aud (audience) claim, and you want to enforce an audience check, set oauth.check.audience to true . Audience checks identify the intended recipients of tokens. As a result, the Kafka broker will reject tokens that do not have its clientId in their aud claims. Default is false . 5 (Optional) An audience parameter passed to the token endpoint. An audience is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener. 6 A valid issuer URI. Only access tokens issued by this issuer will be accepted. (Always required.) 7 The configured client ID of the Kafka broker, which is the same for all brokers. This is the client registered with the authorization server as kafka-broker . Required when an introspection endpoint is used for token validation, or when KeycloakAuthorizer is used. 8 The configured secret for the Kafka broker, which is the same for all brokers. When the broker must authenticate to the authorization server, either a client secret, an access token, or a refresh token must be specified. 9 (Optional) The connect timeout in seconds when connecting to the authorization server. The default value is 60. 10 (Optional) The read timeout in seconds when connecting to the authorization server. The default value is 60. 11 The maximum number of times to retry a failed HTTP request to the authorization server. The default value is 0, meaning that no retries are performed. To use this option effectively, consider reducing the timeouts set by the oauth.connect.timeout.seconds and oauth.read.timeout.seconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make the Kafka broker unresponsive. 12 The time to wait before attempting another retry of a failed HTTP request to the authorization server. By default, this time is set to zero, meaning that no pause is applied. This is because many issues that cause failed requests are per-request network glitches or proxy issues that can be resolved quickly. However, if your authorization server is under stress or experiencing high traffic, you may want to set this option to a value of 100 ms or more to reduce the load on the server and increase the likelihood of successful retries. 13 A JsonPath query used to extract groups information from the JWT token or the introspection endpoint response. Not set by default. This can be used by a custom authorizer to make authorization decisions based on user groups.
14 A delimiter used to parse groups information when returned as a single delimited string. The default value is ',' (comma). 15 (Optional) Set oauth.include.accept.header to false to remove the Accept header from requests. You can use this setting if including the header is causing issues when communicating with the authorization server. Depending on how you apply OAuth 2.0 authentication, and the type of authorization server being used, add additional configuration settings:
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  # ...
  oauth.check.issuer=false \ 1
  oauth.fallback.username.claim="<client_id>" \ 2
  oauth.fallback.username.prefix="<client_account>" \ 3
  oauth.valid.token.type="bearer" \ 4
  oauth.userinfo.endpoint.uri="https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/userinfo" ; 5
1 If your authorization server does not provide an iss claim, it is not possible to perform an issuer check. In this situation, set oauth.check.issuer to false and do not specify an oauth.valid.issuer.uri . Default is true . 2 An authorization server may not provide a single attribute to identify both regular users and clients. When a client authenticates in its own name, the server might provide a client ID . When a user authenticates using a username and password, to obtain a refresh token or an access token, the server might provide a username attribute in addition to a client ID. Use this fallback option to specify the username claim (attribute) to use if a primary user ID attribute is not available. If required, you can use a JsonPath expression like "['client.info'].['client.id']" to retrieve the fallback username from nested JSON attributes within a token. 3 In situations where oauth.fallback.username.claim is applicable, it may also be necessary to prevent name collisions between the values of the username claim, and those of the fallback username claim. Consider a situation where a client called producer exists, but also a regular user called producer exists. In order to differentiate between the two, you can use this property to add a prefix to the user ID of the client. 4 (Only applicable when using oauth.introspection.endpoint.uri ) Depending on the authorization server you are using, the introspection endpoint may or may not return the token type attribute, or it may contain different values. You can specify a valid token type value that the response from the introspection endpoint has to contain. 5 (Only applicable when using oauth.introspection.endpoint.uri ) The authorization server may be configured or implemented in such a way that it does not provide any identifiable information in an introspection endpoint response. In order to obtain the user ID, you can configure the URI of the userinfo endpoint as a fallback. The oauth.username.claim , oauth.fallback.username.claim , and oauth.fallback.username.prefix settings are applied to the response of the userinfo endpoint. What to do next Configure your Kafka clients to use OAuth 2.0 6.5.6.3. Configuring Kafka Java clients to use OAuth 2.0 Configure Kafka producer and consumer APIs to use OAuth 2.0 for interaction with Kafka brokers. Add a callback plugin to your client pom.xml file, then configure your client for OAuth 2.0.
Specify the following in your client configuration: A SASL (Simple Authentication and Security Layer) security protocol: SASL_SSL for authentication over TLS encrypted connections SASL_PLAINTEXT for authentication over unencrypted connections Use SASL_SSL for production and SASL_PLAINTEXT for local development only. When using SASL_SSL , additional ssl.truststore configuration is needed. The truststore configuration is required for secure connection ( https:// ) to the OAuth 2.0 authorization server. To verify the OAuth 2.0 authorization server, add the CA certificate for the authorization server to the truststore in your client configuration. You can configure a truststore in PEM or PKCS #12 format. A Kafka SASL mechanism: OAUTHBEARER for credentials exchange using a bearer token PLAIN to pass client credentials (clientId + secret) or an access token A JAAS (Java Authentication and Authorization Service) module that implements the SASL mechanism: org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule implements the OAuthbearer mechanism org.apache.kafka.common.security.plain.PlainLoginModule implements the plain mechanism To be able to use the OAuthbearer mechanism, you must also add the custom io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler class as the callback handler. JaasClientOauthLoginCallbackHandler handles OAuth callbacks to the authorization server for access tokens during client login. This enables automatic token renewal, ensuring continuous authentication without user intervention. Additionally, it handles login credentials for clients using the OAuth 2.0 password grant method. SASL authentication properties, which support the following authentication methods: OAuth 2.0 client credentials OAuth 2.0 password grant (deprecated) Access token Refresh token Add the SASL authentication properties as JAAS configuration ( sasl.jaas.config and sasl.login.callback.handler.class ). How you configure the authentication properties depends on the authentication method you are using to access the OAuth 2.0 authorization server. In this procedure, the properties are specified in a properties file, then loaded into the client configuration. Note You can also specify authentication properties as environment variables, or as Java system properties. For Java system properties, you can set them using setProperty and pass them on the command line using the -D option. 
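For example, the following is a minimal sketch of passing selected OAuth properties as Java system properties through the KAFKA_OPTS environment variable when running one of the console tools. It assumes, as described in the note above, that the oauth.* properties are resolved from Java system properties; the client ID, secret, token endpoint, topic, and broker address are placeholders, and the remaining SASL settings (security protocol, mechanism, JAAS module, and callback handler) are still expected in the referenced properties file:
export KAFKA_OPTS="-Doauth.client.id=<client_id> \
  -Doauth.client.secret=<client_secret> \
  -Doauth.token.endpoint.uri=https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/token"
/opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic <topic_name> --bootstrap-server <broker_host>:<port>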
Prerequisites Streams for Apache Kafka and Kafka are running An OAuth 2.0 authorization server is deployed and configured for OAuth access to Kafka brokers Kafka brokers are configured for OAuth 2.0 Procedure Add the client library with OAuth 2.0 support to the pom.xml file for the Kafka client:
<dependency>
  <groupId>io.strimzi</groupId>
  <artifactId>kafka-oauth-client</artifactId>
  <version>0.15.0.redhat-00007</version>
</dependency>
Configure the client properties by specifying the following configuration in a properties file: The security protocol The SASL mechanism The JAAS module and authentication properties according to the method being used For example, you can add the following to a client.properties file: Client credentials mechanism properties
security.protocol=SASL_SSL 1
sasl.mechanism=OAUTHBEARER 2
ssl.truststore.location=/tmp/truststore.p12 3
ssl.truststore.password=$STOREPASS
ssl.truststore.type=PKCS12
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.token.endpoint.uri="<token_endpoint_url>" \ 4
  oauth.client.id="<client_id>" \ 5
  oauth.client.secret="<client_secret>" \ 6
  oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ 7
  oauth.ssl.truststore.password="$STOREPASS" \ 8
  oauth.ssl.truststore.type="PKCS12" \ 9
  oauth.scope="<scope>" \ 10
  oauth.audience="<audience>" ; 11
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
1 SASL_SSL security protocol for TLS-encrypted connections. Use SASL_PLAINTEXT over unencrypted connections for local development only. 2 The SASL mechanism specified as OAUTHBEARER or PLAIN . 3 The truststore configuration for secure access to the Kafka cluster. 4 URI of the authorization server token endpoint. 5 Client ID, which is the name used when creating the client in the authorization server. 6 Client secret created when creating the client in the authorization server. 7 The location of the truststore ( oauth-truststore.p12 ) that contains the public key certificate for the authorization server. 8 The password for accessing the truststore. 9 The truststore type. 10 (Optional) The scope for requesting the token from the token endpoint. An authorization server may require a client to specify the scope. 11 (Optional) The audience for requesting the token from the token endpoint. An authorization server may require a client to specify the audience. Password grants mechanism properties
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
ssl.truststore.location=/tmp/truststore.p12
ssl.truststore.password=$STOREPASS
ssl.truststore.type=PKCS12
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.token.endpoint.uri="<token_endpoint_url>" \
  oauth.client.id="<client_id>" \ 1
  oauth.client.secret="<client_secret>" \ 2
  oauth.password.grant.username="<username>" \ 3
  oauth.password.grant.password="<password>" \ 4
  oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \
  oauth.ssl.truststore.password="$STOREPASS" \
  oauth.ssl.truststore.type="PKCS12" \
  oauth.scope="<scope>" \
  oauth.audience="<audience>" ;
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
1 Client ID, which is the name used when creating the client in the authorization server. 2 (Optional) Client secret created when creating the client in the authorization server. 3 Username for password grant authentication. OAuth password grant configuration (username and password) uses the OAuth 2.0 password grant method.
To use password grants, create a user account for a client on your authorization server with limited permissions. The account should act like a service account. Use password grants in environments where user accounts are required for authentication, but consider using a refresh token first. 4 Password for password grant authentication. Note SASL PLAIN does not support passing a username and password (password grants) using the OAuth 2.0 password grant method. Access token properties
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
ssl.truststore.location=/tmp/truststore.p12
ssl.truststore.password=$STOREPASS
ssl.truststore.type=PKCS12
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.token.endpoint.uri="<token_endpoint_url>" \
  oauth.access.token="<access_token>" \ 1
  oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \
  oauth.ssl.truststore.password="$STOREPASS" \
  oauth.ssl.truststore.type="PKCS12" ;
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
1 Long-lived access token for Kafka clients. Refresh token properties
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
ssl.truststore.location=/tmp/truststore.p12
ssl.truststore.password=$STOREPASS
ssl.truststore.type=PKCS12
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.token.endpoint.uri="<token_endpoint_url>" \
  oauth.client.id="<client_id>" \ 1
  oauth.client.secret="<client_secret>" \ 2
  oauth.refresh.token="<refresh_token>" \ 3
  oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \
  oauth.ssl.truststore.password="$STOREPASS" \
  oauth.ssl.truststore.type="PKCS12" ;
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
1 Client ID, which is the name used when creating the client in the authorization server. 2 (Optional) Client secret created when creating the client in the authorization server. 3 Long-lived refresh token for Kafka clients. Input the client properties for OAuth 2.0 authentication into the Java client code. Example showing input of client properties
Properties props = new Properties();
try (FileReader reader = new FileReader("client.properties", StandardCharsets.UTF_8)) {
    props.load(reader);
}
Verify that the Kafka client can access the Kafka brokers. 6.6. Using OAuth 2.0 token-based authorization If you are using OAuth 2.0 with Red Hat Single Sign-On for token-based authentication, you can also use Red Hat Single Sign-On to configure authorization rules to constrain client access to Kafka brokers. Authentication establishes the identity of a user. Authorization decides the level of access for that user. Streams for Apache Kafka supports the use of OAuth 2.0 token-based authorization through Red Hat Single Sign-On Authorization Services , which allows you to manage security policies and permissions centrally. Security policies and permissions defined in Red Hat Single Sign-On are used to grant access to resources on Kafka brokers. Users and clients are matched against policies that permit access to perform specific actions on Kafka brokers. Kafka allows all users full access to brokers by default, and also provides the AclAuthorizer and StandardAuthorizer plugins to configure authorization based on Access Control Lists (ACLs). The ACL rules managed by these plugins are used to grant or deny access to resources based on the username , and these rules are stored within the Kafka cluster itself.
However, OAuth 2.0 token-based authorization with Red Hat Single Sign-On offers far greater flexibility on how you wish to implement access control to Kafka brokers. In addition, you can configure your Kafka brokers to use OAuth 2.0 authorization and ACLs. Additional resources Using OAuth 2.0 token-based authentication Kafka Authorization Red Hat Single Sign-On documentation 6.6.1. OAuth 2.0 authorization mechanism OAuth 2.0 authorization in Streams for Apache Kafka uses Red Hat Single Sign-On server Authorization Services REST endpoints to extend token-based authentication with Red Hat Single Sign-On by applying defined security policies on a particular user, and providing a list of permissions granted on different resources for that user. Policies use roles and groups to match permissions to users. OAuth 2.0 authorization enforces permissions locally based on the received list of grants for the user from Red Hat Single Sign-On Authorization Services. 6.6.1.1. Kafka broker custom authorizer A Red Hat Single Sign-On authorizer ( KeycloakAuthorizer ) is provided with Streams for Apache Kafka. To be able to use the Red Hat Single Sign-On REST endpoints for Authorization Services provided by Red Hat Single Sign-On, you configure a custom authorizer on the Kafka broker. The authorizer fetches a list of granted permissions from the authorization server as needed, and enforces authorization locally on the Kafka Broker, making rapid authorization decisions for each client request. 6.6.2. Configuring OAuth 2.0 authorization support This procedure describes how to configure Kafka brokers to use OAuth 2.0 authorization using Red Hat Single Sign-On Authorization Services. Before you begin Consider the access you require or want to limit for certain users. You can use a combination of Red Hat Single Sign-On groups , roles , clients , and users to configure access in Red Hat Single Sign-On. Typically, groups are used to match users based on organizational departments or geographical locations. And roles are used to match users based on their function. With Red Hat Single Sign-On, you can store users and groups in LDAP, whereas clients and roles cannot be stored this way. Storage and access to user data may be a factor in how you choose to configure authorization policies. Note Super users always have unconstrained access to a Kafka broker regardless of the authorization implemented on the Kafka broker. Prerequisites Streams for Apache Kafka must be configured to use OAuth 2.0 with Red Hat Single Sign-On for token-based authentication . You use the same Red Hat Single Sign-On server endpoint when you set up authorization. You need to understand how to manage policies and permissions for Red Hat Single Sign-On Authorization Services, as described in the Red Hat Single Sign-On documentation . Procedure Access the Red Hat Single Sign-On Admin Console or use the Red Hat Single Sign-On Admin CLI to enable Authorization Services for the Kafka broker client you created when setting up OAuth 2.0 authentication. Use Authorization Services to define resources, authorization scopes, policies, and permissions for the client. Bind the permissions to users and clients by assigning them roles and groups. Configure the Kafka brokers to use Red Hat Single Sign-On authorization. 
Add the following to the Kafka server.properties configuration file to install the authorizer in Kafka:
authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakAuthorizer
principal.builder.class=io.strimzi.kafka.oauth.server.OAuthKafkaPrincipalBuilder
Add configuration for the Kafka brokers to access the authorization server and Authorization Services. The following shows example configuration added as additional properties to server.properties , but you can also define them as environment variables using capitalized or upper-case naming conventions.
strimzi.authorization.token.endpoint.uri="https://<auth_server_address>/auth/realms/REALM-NAME/protocol/openid-connect/token" 1
strimzi.authorization.client.id="kafka" 2
1 The OAuth 2.0 token endpoint URL to Red Hat Single Sign-On. For production, always use https:// URLs. 2 The client ID of the OAuth 2.0 client definition in Red Hat Single Sign-On that has Authorization Services enabled. Typically, kafka is used as the ID. (Optional) Add configuration for specific Kafka clusters. For example:
strimzi.authorization.kafka.cluster.name="kafka-cluster" 1
1 The name of a specific Kafka cluster. Names are used to target permissions, making it possible to manage multiple clusters within the same Red Hat Single Sign-On realm. The default value is kafka-cluster . (Optional) Delegate to simple authorization:
strimzi.authorization.delegate.to.kafka.acl="true" 1
1 Delegate authorization to Kafka AclAuthorizer if access is denied by Red Hat Single Sign-On Authorization Services policies. The default is false . (Optional) Add configuration for TLS connection to the authorization server. For example:
strimzi.authorization.ssl.truststore.location=<path_to_truststore> 1
strimzi.authorization.ssl.truststore.password=<my_truststore_password> 2
strimzi.authorization.ssl.truststore.type=JKS 3
strimzi.authorization.ssl.secure.random.implementation=SHA1PRNG 4
strimzi.authorization.ssl.endpoint.identification.algorithm=HTTPS 5
1 The path to the truststore that contains the certificates. 2 The password for the truststore. 3 The truststore type. If not set, the default Java keystore type is used. 4 Random number generator implementation. If not set, the Java platform SDK default is used. 5 Hostname verification. If set to an empty string, the hostname verification is turned off. If not set, the default value is HTTPS , which enforces hostname verification for server certificates. (Optional) Configure the refresh of grants from the authorization server. The grants refresh job works by enumerating the active tokens and requesting the latest grants for each. For example:
strimzi.authorization.grants.refresh.period.seconds="120" 1
strimzi.authorization.grants.refresh.pool.size="10" 2
strimzi.authorization.grants.max.idle.time.seconds="300" 3
strimzi.authorization.grants.gc.period.seconds="300" 4
strimzi.authorization.reuse.grants="false" 5
1 Specifies how often the list of grants from the authorization server is refreshed (once per minute by default). To turn grants refresh off for debugging purposes, set to "0" . 2 Specifies the size of the thread pool (the degree of parallelism) used by the grants refresh job. The default value is "5" . 3 The time, in seconds, after which an idle grant in the cache can be evicted. The default value is 300. 4 The time, in seconds, between consecutive runs of a job that cleans stale grants from the cache. The default value is 300. 5 Controls whether the latest grants are fetched for a new session.
When disabled, grants are retrieved from Red Hat Single Sign-On and cached for the user. The default value is true . (Optional) Configure network timeouts when communicating with the authorization server. For example:
strimzi.authorization.connect.timeout.seconds="60" 1
strimzi.authorization.read.timeout.seconds="60" 2
strimzi.authorization.http.retries="2" 3
1 The connect timeout in seconds when connecting to the Red Hat Single Sign-On token endpoint. The default value is 60 . 2 The read timeout in seconds when connecting to the Red Hat Single Sign-On token endpoint. The default value is 60 . 3 The maximum number of times to retry (without pausing) a failed HTTP request to the authorization server. The default value is 0 , meaning that no retries are performed. To use this option effectively, consider reducing the timeouts set by the strimzi.authorization.connect.timeout.seconds and strimzi.authorization.read.timeout.seconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make the Kafka broker unresponsive. (Optional) Enable OAuth 2.0 metrics for token validation and authorization:
oauth.enable.metrics="true" 1
1 Controls whether to enable or disable OAuth metrics. The default value is false . (Optional) Remove the Accept header from requests:
oauth.include.accept.header="false" 1
1 Set to false if including the header is causing issues when communicating with the authorization server. The default value is true . Verify the configured permissions by accessing Kafka brokers as clients or users with specific roles, making sure they have the necessary access, or do not have the access they are not supposed to have. 6.7. Using OPA policy-based authorization Open Policy Agent (OPA) is an open-source policy engine. You can integrate OPA with Streams for Apache Kafka to act as a policy-based authorization mechanism for permitting client operations on Kafka brokers. When a request is made from a client, OPA evaluates the request against policies defined for Kafka access, then allows or denies the request. Note Red Hat does not support the OPA server. Additional resources Open Policy Agent website 6.7.1. Defining OPA policies Before integrating OPA with Streams for Apache Kafka, consider how you will define policies to provide fine-grained access controls. You can define access control for Kafka clusters, consumer groups, and topics. For instance, you can define an authorization policy that allows write access from a producer client to a specific broker topic. For this, the policy might specify the: User principal and host address associated with the producer client Operations allowed for the client Resource type ( topic ) and resource name that the policy applies to Allow and deny decisions are written into the policy, and a response is provided based on the request and client identification data provided. In our example the producer client would have to satisfy the policy to be allowed to write to the topic. 6.7.2. Connecting to the OPA To enable Kafka to access the OPA policy engine to query access control policies, you configure a custom OPA authorizer plugin ( kafka-authorizer-opa- VERSION .jar ) in your Kafka server.properties file. When a request is made by a client, the OPA policy engine is queried by the plugin using a specified URL address and a REST endpoint, which must be the name of the defined policy.
The plugin provides the details of the client request - user principal, operation, and resource - in JSON format to be checked against the policy. The details will include the unique identity of the client; for example, taking the distinguished name from the client certificate if TLS authentication is used. OPA uses the data to provide a response - either true or false - to the plugin to allow or deny the request. A request of this form can also be sent to the OPA endpoint directly to test a policy, as shown in the sketch after the following procedure. 6.7.3. Configuring OPA authorization support This procedure describes how to configure Kafka brokers to use OPA authorization. Before you begin Consider the access you require or want to limit for certain users. You can use a combination of users and Kafka resources to define OPA policies. It is possible to set up OPA to load user information from an LDAP data source. Note Super users always have unconstrained access to a Kafka broker regardless of the authorization implemented on the Kafka broker. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. An OPA server must be available for connection. The OPA authorizer plugin for Kafka . Procedure Write the OPA policies required for authorizing client requests to perform operations on the Kafka brokers. See Defining OPA policies . Now configure the Kafka brokers to use OPA. Install the OPA authorizer plugin for Kafka . See Connecting to the OPA . Make sure that the plugin files are included in the Kafka classpath. Add the following to the Kafka server.properties configuration file to enable the OPA plugin:
authorizer.class.name: com.bisnode.kafka.authorization.OpaAuthorizer
Add further configuration to server.properties for the Kafka brokers to access the OPA policy engine and policies. For example:
opa.authorizer.url=https://OPA-ADDRESS/allow 1
opa.authorizer.allow.on.error=false 2
opa.authorizer.cache.initial.capacity=50000 3
opa.authorizer.cache.maximum.size=50000 4
opa.authorizer.cache.expire.after.seconds=600000 5
super.users=User:alice;User:bob 6
1 (Required) The URL of the OPA policy that the authorizer plugin queries. In this example, the policy is called allow . 2 Flag to specify whether a client is allowed or denied access by default if the authorizer plugin fails to connect with the OPA policy engine. 3 Initial capacity in bytes of the local cache. The cache is used so that the plugin does not have to query the OPA policy engine for every request. 4 Maximum capacity in bytes of the local cache. 5 Time in milliseconds that the local cache is refreshed by reloading from the OPA policy engine. 6 A list of user principals treated as super users, so that they are always allowed without querying the Open Policy Agent policy. Refer to the Open Policy Agent website for information on authentication and authorization options. Verify the configured permissions by accessing Kafka brokers using clients that have and do not have the correct authorization.
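To check a policy before connecting Kafka brokers to it, you can post a request body that mimics the plugin's input directly to the same URL that you configure in opa.authorizer.url . The following is a hedged sketch; the field names used in the input document ( principal , operation , resource ) are illustrative assumptions, so inspect the input document produced by your version of the OPA authorizer plugin and adjust them to match:
curl -s -X POST https://OPA-ADDRESS/allow \
  -H 'Content-Type: application/json' \
  -d '{"input": {"principal": "User:user1", "operation": "Write", "resource": {"type": "topic", "name": "myTopic"}}}'
OPA responds with a JSON document such as {"result": true} or {"result": false} , which the plugin maps to allowing or denying the client request.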
"listeners=INT1://:9092,INT2://:9093,REPLICATION://:9094",
"listener.security.protocol.map=INT1:SASL_PLAINTEXT,INT2:SASL_SSL,REPLICATION:SSL",
"listener.security.protocol.map=INT1:SSL,INT2:SSL,REPLICATION:SSL",
"ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456",
"listeners=INT1://:9092,INT2://:9093,REPLICATION://:9094 listener.security.protocol.map=INT1:SSL,INT2:SSL,REPLICATION:SSL Default configuration - will be used for listeners INT1 and INT2 ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456 Different configuration for listener REPLICATION listener.name.replication.ssl.keystore.location=/path/to/keystore/replication.jks listener.name.replication.ssl.keystore.password=123456",
"listeners=UNENCRYPTED://:9092,ENCRYPTED://:9093,REPLICATION://:9094 listener.security.protocol.map=UNENCRYPTED:PLAINTEXT,ENCRYPTED:SSL,REPLICATION:PLAINTEXT ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456",
"ssl.truststore.location=/path/to/truststore.jks ssl.truststore.password=123456 ssl.client.auth=required",
"KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; };",
"listeners=INSECURE://:9092,AUTHENTICATED://:9093,REPLICATION://:9094 listener.security.protocol.map=INSECURE:PLAINTEXT,AUTHENTICATED:SASL_PLAINTEXT,REPLICATION:PLAINTEXT sasl.enabled.mechanisms=PLAIN",
"su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties",
"KafkaServer { org.apache.kafka.common.security.scram.ScramLoginModule required; };",
"listeners=INSECURE://:9092,AUTHENTICATED://:9093,REPLICATION://:9094 listener.security.protocol.map=INSECURE:PLAINTEXT,AUTHENTICATED:SASL_PLAINTEXT,REPLICATION:PLAINTEXT sasl.enabled.mechanisms=SCRAM-SHA-512",
"su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties",
"KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/etc/security/keytabs/kafka_server.keytab\" principal=\"kafka/[email protected]\"; org.apache.kafka.common.security.scram.ScramLoginModule required; };",
"sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512",
"bin/kafka-storage.sh format --config /opt/kafka/config/kraft/server.properties --cluster-id 1 --release-version 3.7 --add-scram 'SCRAM-SHA-512=[name=kafka, password=changeit]' --ignore formatted",
"sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512",
"KafkaServer { org.apache.kafka.common.security.plain.ScramLoginModule required username=\"kafka\" password=\"changeit\" # };",
"/opt/kafka/kafka-configs.sh --bootstrap-server <broker_host>:<port> --alter --add-config 'SCRAM-SHA-512=[password=<password>]' --entity-type users --entity-name <username>",
"/opt/kafka/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name user1",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name <username>",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name user1",
"/opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab",
"chown kafka:kafka -R /opt/kafka/krb5",
"KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; };",
"broker.id=0 listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 .. sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5",
"su - kafka export KAFKA_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties",
"sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \\ 4 useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/producer1.keytab\" principal=\"producer1/[email protected]\";",
"sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/consumer1.keytab\" principal=\"consumer1/[email protected]\";",
"export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094",
"export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094",
"authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer",
"super.users=User:admin,User:operator",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic myTopic Current ACLs for resource `Topic:myTopic`: User:user1 has Allow permission for operations: Read from hosts: * User:user2 has Allow permission for operations: Read from hosts: * User:user2 has Deny permission for operations: Read from hosts: 127.0.0.1 User:user1 has Allow permission for operations: Describe from hosts: * User:user2 has Allow permission for operations: Describe from hosts: * User:user2 has Deny permission for operations: Describe from hosts: 127.0.0.1",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1",
"listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER",
"listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER,PLAIN",
"sasl.enabled.mechanisms=OAUTHBEARER 1 listeners=CLIENT://0.0.0.0:9092 2 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 3 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER 4 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 5 inter.broker.listener.name=CLIENT 6 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 7 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \\ 8 oauth.valid.issuer.uri=\"https://<oauth_server_address>\" \\ 9 oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/jwks\" \\ 10 oauth.username.claim=\"preferred_username\" \\ 11 oauth.client.id=\"kafka-broker\" \\ 12 oauth.client.secret=\"kafka-secret\" \\ 13 oauth.token.endpoint.uri=\"https://<oauth_server_address>/token\" ; 14 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 15 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 16",
"listeners=REPLICATION://kafka:9091,CLIENT://kafka:9092 1 listener.security.protocol.map=REPLICATION:SSL,CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER inter.broker.listener.name=REPLICATION listener.name.replication.ssl.keystore.password=<keystore_password> 3 listener.name.replication.ssl.truststore.password=<truststore_password> listener.name.replication.ssl.keystore.type=JKS listener.name.replication.ssl.truststore.type=JKS listener.name.replication.ssl.secure.random.implementation=SHA1PRNG 4 listener.name.replication.ssl.endpoint.identification.algorithm=HTTPS 5 listener.name.replication.ssl.keystore.location=<path_to_keystore> 6 listener.name.replication.ssl.truststore.location=<path_to_truststore> 7 listener.name.replication.ssl.client.auth=required 8 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \\ 9 oauth.valid.issuer.uri=\"https://<oauth_server_address>\" oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/jwks\" oauth.username.claim=\"preferred_username\" ;",
"listeners=CLIENT://0.0.0.0:9092 1 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER,PLAIN 3 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 4 inter.broker.listener.name=CLIENT 5 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 6 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \\ 7 oauth.valid.issuer.uri=\"http://<auth_server>/auth/realms/<realm>\" \\ 8 oauth.jwks.endpoint.uri=\"https://<auth_server>/auth/realms/<realm>/protocol/openid-connect/certs\" \\ 9 oauth.username.claim=\"preferred_username\" \\ 10 oauth.client.id=\"kafka-broker\" \\ 11 oauth.client.secret=\"kafka-secret\" \\ 12 oauth.token.endpoint.uri=\"https://<oauth_server_address>/token\" ; 13 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 14 listener.name.client.plain.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.plain.JaasServerOauthOverPlainValidatorCallbackHandler 15 listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \\ 16 oauth.valid.issuer.uri=\"https://<oauth_server_address>\" \\ 17 oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/jwks\" \\ 18 oauth.username.claim=\"preferred_username\" \\ 19 oauth.token.endpoint.uri=\"http://<auth_server>/auth/realms/<realm>/protocol/openid-connect/token\" ; 20 connections.max.reauth.ms=3600000 21",
"listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.valid.issuer.uri=\"https://<oauth_server_address>\" \\ 1 oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/jwks\" \\ 2 oauth.jwks.refresh.seconds=\"300\" \\ 3 oauth.jwks.refresh.min.pause.seconds=\"1\" \\ 4 oauth.jwks.expiry.seconds=\"360\" \\ 5 oauth.username.claim=\"preferred_username\" \\ 6 oauth.ssl.truststore.location=\"<path_to_truststore_p12_file>\" \\ 7 oauth.ssl.truststore.password=\"<truststore_password>\" \\ 8 oauth.ssl.truststore.type=\"PKCS12\" ; 9",
"listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.introspection.endpoint.uri=\"https://<oauth_server_address>/introspection\" \\ 1 oauth.client.id=\"kafka-broker\" \\ 2 oauth.client.secret=\"kafka-broker-secret\" \\ 3 oauth.ssl.truststore.location=\"<path_to_truststore_p12_file>\" \\ 4 oauth.ssl.truststore.password=\"<truststore_password>\" \\ 5 oauth.ssl.truststore.type=\"PKCS12\" \\ 6 oauth.username.claim=\"preferred_username\" ; 7",
"listener.name.client.oauthbearer.connections.max.reauth.ms=3600000",
"sasl.enabled.mechanisms=OAUTHBEARER listeners=CLIENT://0.0.0.0:9092 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER sasl.mechanism.inter.broker.protocol=OAUTHBEARER inter.broker.listener.name=CLIENT listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.valid.issuer.uri=\"https://<oauth_server_address>/auth/realms/<realm_name>\" oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/certs\" oauth.jwks.refresh.seconds=\"300\" oauth.jwks.refresh.min.pause.seconds=\"1\" oauth.jwks.expiry.seconds=\"360\" oauth.username.claim=\"preferred_username\" oauth.ssl.truststore.location=\"<path_to_truststore_p12_file>\" oauth.ssl.truststore.password=\"<truststore_password>\" oauth.ssl.truststore.type=\"PKCS12\" ; listener.name.client.oauthbearer.connections.max.reauth.ms=3600000",
"listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.introspection.endpoint.uri=\" https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/introspection \" #",
"listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.client.id=\"kafka-broker\" oauth.client.secret=\"kafka-broker-secret\" oauth.ssl.truststore.location=\"<path_to_truststore_p12_file>\" oauth.ssl.truststore.password=\"<truststore_password>\" oauth.ssl.truststore.type=\"PKCS12\" ;",
"oauth.ssl.endpoint.identification.algorithm=\"\"",
"listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.token.endpoint.uri=\"https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/token\" \\ 1 oauth.custom.claim.check=\"@.custom == 'custom-value'\" \\ 2 oauth.scope=\"<scope>\" \\ 3 oauth.check.audience=\"true\" \\ 4 oauth.audience=\"<audience>\" \\ 5 oauth.valid.issuer.uri=\"https://https://<oauth_server_address>/auth/<realm_name>\" \\ 6 oauth.client.id=\"kafka-broker\" \\ 7 oauth.client.secret=\"kafka-broker-secret\" \\ 8 oauth.connect.timeout.seconds=60 \\ 9 oauth.read.timeout.seconds=60 \\ 10 oauth.http.retries=2 \\ 11 oauth.http.retry.pause.millis=300 \\ 12 oauth.groups.claim=\"USD.groups\" \\ 13 oauth.groups.claim.delimiter=\",\" \\ 14 oauth.include.accept.header=\"false\" ; 15",
"listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.check.issuer=false \\ 1 oauth.fallback.username.claim=\"<client_id>\" \\ 2 oauth.fallback.username.prefix=\"<client_account>\" \\ 3 oauth.valid.token.type=\"bearer\" \\ 4 oauth.userinfo.endpoint.uri=\"https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/userinfo\" ; 5",
"<dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.15.0.redhat-00007</version> </dependency>",
"security.protocol=SASL_SSL 1 sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 3 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" \\ 4 oauth.client.id=\"<client_id>\" \\ 5 oauth.client.secret=\"<client_secret>\" \\ 6 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" \\ 7 oauth.ssl.truststore.password=\"USDSTOREPASS\" \\ 8 oauth.ssl.truststore.type=\"PKCS12\" \\ 9 oauth.scope=\"<scope>\" \\ 10 oauth.audience=\"<audience>\" ; 11 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.password.grant.username=\"<username>\" \\ 3 oauth.password.grant.password=\"<password>\" \\ 4 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.scope=\"<scope>\" oauth.audience=\"<audience>\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.access.token=\"<access_token>\" \\ 1 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.refresh.token=\"<refresh_token>\" \\ 3 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"Properties props = new Properties(); try (FileReader reader = new FileReader(\"client.properties\", StandardCharsets.UTF_8)) { props.load(reader); }",
"authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakAuthorizer principal.builder.class=io.strimzi.kafka.oauth.server.OAuthKafkaPrincipalBuilder",
"strimzi.authorization.token.endpoint.uri=\"https://<auth_server_address>/auth/realms/REALM-NAME/protocol/openid-connect/token\" 1 strimzi.authorization.client.id=\"kafka\" 2",
"strimzi.authorization.kafka.cluster.name=\"kafka-cluster\" 1",
"strimzi.authorization.delegate.to.kafka.acl=\"true\" 1",
"strimzi.authorization.ssl.truststore.location=<path_to_truststore> 1 strimzi.authorization.ssl.truststore.password=<my_truststore_password> 2 strimzi.authorization.ssl.truststore.type=JKS 3 strimzi.authorization.ssl.secure.random.implementation=SHA1PRNG 4 strimzi.authorization.ssl.endpoint.identification.algorithm=HTTPS 5",
"strimzi.authorization.grants.refresh.period.seconds=\"120\" 1 strimzi.authorization.grants.refresh.pool.size=\"10\" 2 strimzi.authorization.grants.max.idle.time.seconds=\"300\" 3 strimzi.authorization.grants.gc.period.seconds=\"300\" 4 strimzi.authorization.reuse.grants=\"false\" 5",
"strimzi.authorization.connect.timeout.seconds=\"60\" 1 strimzi.authorization.read.timeout.seconds=\"60\" 2 strimzi.authorization.http.retries=\"2\" 3",
"oauth.enable.metrics=\"true\" 1",
"oauth.include.accept.header=\"false\" 1",
"authorizer.class.name: com.bisnode.kafka.authorization.OpaAuthorizer",
"opa.authorizer.url=https:// OPA-ADDRESS /allow 1 opa.authorizer.allow.on.error=false 2 opa.authorizer.cache.initial.capacity=50000 3 opa.authorizer.cache.maximum.size=50000 4 opa.authorizer.cache.expire.after.seconds=600000 5 super.users=User:alice;User:bob 6"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-securing-kafka-str |
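The client configuration entries above are ordinary Kafka client properties. As a minimal sketch that is not taken from the Streams for Apache Kafka documentation, the following Java fragment shows one way such a client.properties file could be loaded and used to create a producer; the bootstrap address, topic name, and serializer settings are illustrative assumptions and may already be present in the properties file in a real setup.

import java.io.FileReader;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OAuthProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Load the OAuth-enabled client configuration shown above.
        try (FileReader reader = new FileReader("client.properties", StandardCharsets.UTF_8)) {
            props.load(reader);
        }
        // Illustrative assumptions; adjust or omit these if the keys are already set in the file.
        props.putIfAbsent("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
        props.putIfAbsent("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.putIfAbsent("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // The configured sasl.login.callback.handler.class obtains and refreshes the access
        // token, so the application code does not handle tokens directly.
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value")).get();
        }
    }
}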
Chapter 1. About Lightspeed | Chapter 1. About Lightspeed 1.1. OpenShift Lightspeed overview Red Hat OpenShift Lightspeed provides intelligent, natural language processing capabilities designed to make Red Hat cloud-native application platforms easier to use for beginners and more efficient for experienced professionals. Note Because OpenShift Lightspeed releases on a different cadence from OpenShift Container Platform, the OpenShift Lightspeed documentation is available as a separate documentation set at About Red Hat OpenShift Lightspeed . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/lightspeed/about-lightspeed |
Chapter 6. Changes in Eclipse Vert.x authentication and authorization The following sections describe the changes in Eclipse Vert.x authentication and authorization. The Eclipse Vert.x authentication module has major updates in Eclipse Vert.x 4. The io.vertx.ext.auth.AuthProvider interface has been split into two new interfaces: io.vertx.ext.auth.authentication.AuthenticationProvider Important The authentication feature is provided as a Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. io.vertx.ext.auth.authorization.AuthorizationProvider This update enables any provider to perform authentication and authorization independently. 6.1. Migrating the authentication applications The authentication mechanism has changed at the result level. In earlier releases, the result was a User object, which was provider specific. In Eclipse Vert.x 4, the result is a common implementation of io.vertx.ext.auth.User . The following example shows how a user was authenticated in Eclipse Vert.x 3.x releases. JsonObject authInfo = new JsonObject() .put("username", "john") .put("password", "super$ecret"); // omitting the error handling for brevity provider.authenticate(authInfo, res -> { if (res.succeeded()) { // may require type casting for example on Oauth2 User user = res.result(); } }); The following example shows how to authenticate a user in Eclipse Vert.x 4. JsonObject authInfo = new JsonObject() .put("username", "john") .put("password", "super$ecret"); // omitting the error handling for brevity provider.authenticate(authInfo, res -> { if (res.succeeded()) { // Never needs type casting User user = res.result(); } }); 6.2. Migrating the authorization applications Authorization is a new feature in Eclipse Vert.x 4. In earlier releases, you could only check whether a user was authorized to perform tasks on the User object. This meant that the provider was responsible for both authentication and authorization of the user. In Eclipse Vert.x 4, the User object instances are not associated with a particular authentication provider. As a result, you can authenticate and authorize a user by using different providers. For example, you can authenticate a user using OAuth2 and perform authorization checks against MongoDB or an SQL database. The following example shows how an application checks if a user can use Printer #1234 in Eclipse Vert.x 3.x releases. // omitting the error handling for brevity user.isAuthorized("printers:printer1234", res -> { if (res.succeeded()) { boolean hasAuthority = res.result(); if (hasAuthority) { System.out.println("User can use the printer"); } else { System.out.println("User cannot use the printer"); } } }); This authorization worked for JDBC and MongoDB. However, it did not work for providers such as OAuth2, because the provider did not perform authorization checks. From Eclipse Vert.x 4, it is possible to perform such authorization checks by using different providers. 
// omitting the error handling for brevity provider.getAuthorizations(user, res -> { if (res.succeeded()) { if (PermissionBasedAuthorization.create("printer1234").match(user)) { System.out.println("User can use the printer"); } else { System.out.println("User cannot use the printer"); } } }); You can check authorizations on roles, permissions, logic operations, wildcards and any other implementation you add. 6.3. Changes in key management In Eclipse Vert.x 4, there are major updates in handling keys. The most important change is that when a key loads, there is no distinction between public buffer and private buffer. The following classes have been updated: io.vertx.ext.auth.KeyStoreOptions used to work with jce keystores io.vertx.ext.auth.SecretOptions used to handle symmetric secrets io.vertx.ext.auth.PubSecKeyOptions used to handle public secret keys The following section describes the changes in key management. 6.3.1. Secret options class is no longer available The SecretOptions class is no longer available. Use the new PubSecKeyOptions class instead to work with a cryptographic key. The following example shows how methods of SecretOptions class were used in Eclipse Vert.x 3.x releases. new SecretOptions() .setType("HS256") .setSecret("password") The following example shows how methods of PubSecKeyOptions class should be used in Eclipse Vert.x 4. new PubSecKeyOptions() .setAlgorithm("HS256") .setSecretKey("password") 6.3.2. Updates in public secret keys management In Eclipse Vert.x 3.x, the configuration object in public secret key management assumed that: Keys are configured as key-pairs. Key data is a PKCS8 encoded string without standard delimiters. The following example shows how to configure key pair in Eclipse Vert.x 3.x. new PubSecKeyOptions() .setPublicKey( // remove the PEM boundaries pubPemString .replaceAll("-----BEGIN PUBLIC KEY----") .replaceAll("-----END PUBLIC KEY----")) .setSecretKey( // remove the PEM boundaries secPemString .replaceAll("-----BEGIN PUBLIC KEY----") .replaceAll("-----END PUBLIC KEY----")); In Eclipse Vert.x 4, you must specify both the public and private key. The following example shows how to configure key pair in Eclipse Vert.x 4. PubSecKeyOptions pubKey = new PubSecKeyOptions() // the buffer is the exact contents of the PEM file and had boundaries included in it .setBuffer(pubPemString); PubSecKeyOptions secKey = new PubSecKeyOptions() // the buffer is the exact contents of the PEM file and had boundaries included in it .setBuffer(secPemString); You can now handle X509 certificates using PubSecKeyOptions . PubSecKeyOptions x509Certificate = new PubSecKeyOptions() // the buffer is the exact contents of the PEM file and had boundaries included in it .setBuffer(x509PemString); 6.3.3. Changes in keystore management In Eclipse Vert.x 3.x, KeyStoreOptions assumes that the keystore format is jceks , and the stored password is the same as the password of the key. As jceks is a proprietary format, it is recommended to use a standard format, such as JDK, instead. When you use KeyStoreOptions in Eclipse Vert.x 4, you can specify a store type. For example, store types such as PKCS11, PKCS12, and so on can be set. The default store type is jceks . In Eclipse Vert.x 3.x, all keystore entries would share the same password, that is, the keystore password. In Eclipse Vert.x 4, each keystore entry can have a dedicated password. If you do not want to set password for each keystore entry, you can configure the keystore password as the default password for all entries. 
The following example shows how to load a jceks keystore in Eclipse Vert.x 3.x. new KeyStoreOptions() .setPath("path/to/keystore.jks") .setPassword("keystore-password"); In Eclipse Vert.x 4, the default format is assumed to be the default format configured by JDK. The format is PKCS12 in Java 9 and above. The following example shows how to load a jceks keystore in Eclipse Vert.x 4. new KeyStoreOptions() .setPath("path/to/keystore.jks") // Modern JDKs use `jceks` keystore. But this type is not the default // If the type is not set to `jceks` then probably `pkcs12` will be used .setType("jceks") .setPassword("keystore-password") // optionally if your keys have different passwords // and if a key specific id is not provided it defaults to // the keystore password .putPasswordProtection("key-id", "key-specific-password"); 6.4. Deprecated and removed authentication and authorization methods The following sections list methods deprecated and removed for authentication and authorization. 6.4.1. List of removed authentication and authorization methods The following methods have been removed: Removed methods Replacing methods OAuth2Auth.createKeycloak() KeycloakAuth.create(vertx, JsonObject) () OAuth2Auth.create(Vertx, OAuth2FlowType, OAuth2ClientOptions)() OAuth2Auth.create(vertx, new OAuth2ClientOptions().setFlow(YOUR_DESIRED_FLOW)) OAuth2Auth.create(Vertx, OAuth2FlowType) OAuth2Auth.create(vertx, new OAuth2ClientOptions().setFlow(YOUR_DESIRED_FLOW)) User.isAuthorised() User.isAuthorized() User.setAuthProvider() No replacing method AccessToken.refreshToken() AccessToken.opaqueRefreshToken() io.vertx.ext.auth.jwt.JWTOptions data object io.vertx.ext.jwt.JWTOptions data object Oauth2ClientOptions.isUseAuthorizationHeader() No replacing method Oauth2ClientOptions.scopeSeparator() No replacing method 6.4.2. List of deprecated authentication and authorization methods The following methods have been deprecated: Deprecated methods Replacing methods OAuth2Auth.decodeToken() AuthProvider.authenticate() OAuth2Auth.introspectToken() AuthProvider.authenticate() OAuth2Auth.getFlowType() No replacing method OAuth2Auth.loadJWK() OAuth2Auth.jwkSet() Oauth2ClientOptions.isUseAuthorizationHeader() No replacing method 6.4.3. List of deprecated authentication and authorization classes The following classes have been deprecated: Deprecated class Replacing class AbstractUser Create user objects using the ` User.create(JsonObject)` method. AuthOptions No replacing class JDBCAuthOptions JDBCAuthenticationOptions for authentication and JDBCAuthorizationOptions for authorization JDBCHashStrategy No replacing class OAuth2RBAC AuthorizationProvider Oauth2Response Recommended to use WebClient class KeycloakHelper No replacing class | [
"JsonObject authInfo = new JsonObject() .put(\"username\", \"john\") .put(\"password\", \"superUSDecret\"); // omitting the error handling for brevity provider.authenticate(authInfo, res -> { if (res.succeeded()) { // may require type casting for example on Oauth2 User user = res.result(); } });",
"JsonObject authInfo = new JsonObject() .put(\"username\", \"john\") .put(\"password\", \"superUSDecret\"); // omitting the error handling for brevity provider.authenticate(authInfo, res -> { if (res.succeeded()) { // Never needs type casting User user = res.result(); } });",
"// omitting the error handling for brevity user.isAuthorized(\"printers:printer1234\", res -> { if (res.succeeded()) { boolean hasAuthority = res.result(); if (hasAuthority) { System.out.println(\"User can use the printer\"); } else { System.out.println(\"User cannot use the printer\"); } } });",
"// omitting the error handling for brevity provider.getAuthorizations(user, res -> { if (res.succeeded()) { if (PermissionBasedAuthorization.create(\"printer1234\").match(user)) { System.out.println(\"User can use the printer\"); } else { System.out.println(\"User cannot use the printer\"); } } });",
"new SecretOptions() .setType(\"HS256\") .setSecret(\"password\")",
"new PubSecKeyOptions() .setAlgorithm(\"HS256\") .setSecretKey(\"password\")",
"new PubSecKeyOptions() .setPublicKey( // remove the PEM boundaries pubPemString .replaceAll(\"-----BEGIN PUBLIC KEY----\") .replaceAll(\"-----END PUBLIC KEY----\")) .setSecretKey( // remove the PEM boundaries secPemString .replaceAll(\"-----BEGIN PUBLIC KEY----\") .replaceAll(\"-----END PUBLIC KEY----\"));",
"PubSecKeyOptions pubKey = new PubSecKeyOptions() // the buffer is the exact contents of the PEM file and had boundaries included in it .setBuffer(pubPemString); PubSecKeyOptions secKey = new PubSecKeyOptions() // the buffer is the exact contents of the PEM file and had boundaries included in it .setBuffer(secPemString);",
"PubSecKeyOptions x509Certificate = new PubSecKeyOptions() // the buffer is the exact contents of the PEM file and had boundaries included in it .setBuffer(x509PemString);",
"new KeyStoreOptions() .setPath(\"path/to/keystore.jks\") .setPassword(\"keystore-password\");",
"new KeyStoreOptions() .setPath(\"path/to/keystore.jks\") // Modern JDKs use `jceks` keystore. But this type is not the default // If the type is not set to `jceks` then probably `pkcs12` will be used .setType(\"jceks\") .setPassword(\"keystore-password\") // optionally if your keys have different passwords // and if a key specific id is not provided it defaults to // the keystore password .putPasswordProtection(\"key-id\", \"key-specific-password\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_4.3_migration_guide/authentication-and-authorization_vertx |
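Bringing the two halves of the migration together, the following sketch is not part of the migration guide, but it shows one way an Eclipse Vert.x 4 application might authenticate a user with one provider and then run the printer check from the examples above against a separate authorization provider. How the two providers are constructed is omitted and assumed to be configured elsewhere, for example OAuth2 for authentication and a SQL- or MongoDB-backed provider for authorization.

import io.vertx.core.json.JsonObject;
import io.vertx.ext.auth.User;
import io.vertx.ext.auth.authentication.AuthenticationProvider;
import io.vertx.ext.auth.authorization.AuthorizationProvider;
import io.vertx.ext.auth.authorization.PermissionBasedAuthorization;

public class PrinterAccessCheck {

    public void check(AuthenticationProvider authn, AuthorizationProvider authz) {
        JsonObject authInfo = new JsonObject()
            .put("username", "john")
            .put("password", "super$ecret");

        // Step 1: authenticate. The result is the common io.vertx.ext.auth.User type.
        authn.authenticate(authInfo, authRes -> {
            if (authRes.succeeded()) {
                User user = authRes.result();
                // Step 2: load the user's authorizations from a possibly different provider.
                authz.getAuthorizations(user, loadRes -> {
                    if (loadRes.succeeded()
                            && PermissionBasedAuthorization.create("printer1234").match(user)) {
                        System.out.println("User can use the printer");
                    } else {
                        System.out.println("User cannot use the printer");
                    }
                });
            }
        });
    }
}

Because the returned User is the common io.vertx.ext.auth.User type, the same object can be passed to any AuthorizationProvider, which is the practical benefit of splitting the old AuthProvider interface.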
Chapter 3. Installing a cluster on Nutanix | Chapter 3. Installing a cluster on Nutanix In OpenShift Container Platform version 4.18, you can choose one of the following options to install a cluster on your Nutanix instance: Using installer-provisioned infrastructure : Use the procedures in the following sections to use installer-provisioned infrastructure. Installer-provisioned infrastructure is ideal for installing in connected or disconnected network environments. The installer-provisioned infrastructure includes an installation program that provisions the underlying infrastructure for the cluster. Using the Assisted Installer : The Assisted Installer hosted at console.redhat.com . The Assisted Installer cannot be used in disconnected environments. The Assisted Installer does not provision the underlying infrastructure for the cluster, so you must provision the infrastructure before you run the Assisted Installer. Installing with the Assisted Installer also provides integration with Nutanix, enabling autoscaling. See Installing an on-premise cluster using the Assisted Installer for additional details. Using user-provisioned infrastructure : Complete the relevant steps outlined in the Installing a cluster on any platform documentation. 3.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment is using the default self-signed SSL certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access to the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. 
Before you update the cluster, you update the content of the mirror registry. 3.3. Internet access for Prism Central Prism Central requires internet access to obtain the Red Hat Enterprise Linux CoreOS (RHCOS) image that is required to install the cluster. The RHCOS image for Nutanix is available at rhcos.mirror.openshift.com . 3.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.5. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.6. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates. Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 3.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 
When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Optional: Update one or more of the default configuration parameters in the install.config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on Nutanix". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Nutanix 3.7.1. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIPs: - 10.40.142.7 12 defaultMachinePlatform: bootType: Legacy categories: 13 - key: <category_key_name> value: <category_value> project: 14 type: name name: <project_name> ingressVIPs: - 10.40.142.8 15 prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 1 10 12 15 16 17 18 19 21 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 5 9 13 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 14 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. 
If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server and pointing the installation program to the image. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 3.7.2. Configuring failure domains Failure domains improve the fault tolerance of an OpenShift Container Platform cluster by distributing control plane and compute machines across multiple Nutanix Prism Elements (clusters). Tip It is recommended that you configure three failure domains to ensure high-availability. Prerequisites You have an installation configuration file ( install-config.yaml ). Procedure Edit the install-config.yaml file and add the following stanza to configure the first failure domain: apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid> # ... where: <failure_domain_name> Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash ( - ). The dash cannot be in the leading or ending position of the name. <prism_element_name> Optional. Specifies the name of the Prism Element. <prism_element_uuid > Specifies the UUID of the Prism Element. <network_uuid > Specifies the one or more UUIDs of the Prism Element subnet objects. Among them, one of the subnet's IP address prefixes (CIDRs) must contain the virtual IP addresses that the OpenShift Container Platform cluster uses. A maximum of 32 subnets for each failure domain (Prism Element) in an OpenShift Container Platform cluster is supported. All subnetUUID values must be unique. As required, configure additional failure domains. To distribute control plane and compute machines across the failure domains, do one of the following: If compute and control plane machines can share the same set of failure domains, add the failure domain names under the cluster's default machine configuration. Example of control plane and compute machines sharing a set of failure domains apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... If compute and control plane machines must use different failure domains, add the failure domain names under the respective machine pools. Example of control plane and compute machines using different failure domains apiVersion: v1 baseDomain: example.com compute: # ... 
controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 # ... Save the file. 3.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.8. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.9. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. 
Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: USD openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: USD cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. 
USD ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 3.10. Adding config map and secret resources required for Nutanix CCM Installations on Nutanix require additional ConfigMap and Secret resources to integrate with the Nutanix Cloud Controller Manager (CCM). Prerequisites You have created a manifests directory within your installation directory. Procedure Navigate to the manifests directory: USD cd <path_to_installation_directory>/manifests Create the cloud-conf ConfigMap file with the name openshift-cloud-controller-manager-cloud-config.yaml and add the following information: apiVersion: v1 kind: ConfigMap metadata: name: cloud-conf namespace: openshift-cloud-controller-manager data: cloud.conf: "{ \"prismCentral\": { \"address\": \"<prism_central_FQDN/IP>\", 1 \"port\": 9440, \"credentialRef\": { \"kind\": \"Secret\", \"name\": \"nutanix-credentials\", \"namespace\": \"openshift-cloud-controller-manager\" } }, \"topologyDiscovery\": { \"type\": \"Prism\", \"topologyCategories\": null }, \"enableCustomLabeling\": true }" 1 Specify the Prism Central FQDN/IP. Verify that the file cluster-infrastructure-02-config.yml exists and has the following information: spec: cloudConfig: key: config name: cloud-provider-config 3.11. Services for a user-managed load balancer You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer. Important Configuring a user-managed load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for a user-managed load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 3.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 3.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 3.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for user-managed load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. 
This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure a user-managed load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and the API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 3.11.1. Configuring a user-managed load balancer You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer. Important Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer. Note MetalLB, which runs on a cluster, functions as a user-managed load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80, and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80, and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. 
OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples show health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration. Example HAProxy configuration with one listed subnet # ... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Example HAProxy configuration with multiple listed subnets # ... 
listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s # ... Use the curl CLI command to verify that the user-managed load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff 
x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. For your OpenShift Container Platform cluster to use the user-managed load balancer, you must specify the following configuration in your cluster's install-config.yaml file: # ... platform: nutanix: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3 # ... 1 Set UserManaged for the type parameter to specify a user-managed load balancer for your cluster. The parameter defaults to OpenShiftManagedDefault , which denotes the default internal load balancer. For services defined in an openshift-kni-infra namespace, a user-managed load balancer can deploy the coredns service to pods in your cluster but ignores keepalived and haproxy services. 2 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the Kubernetes API can communicate with the user-managed load balancer. 3 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster. 
Verification Use the curl CLI command to verify that the user-managed load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cache HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 3.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
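Optionally, confirm which build of the installation program you extracted before you start the deployment. A quick check, run from the directory that contains the installation program:
./openshift-install version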
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.13. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 3.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 3.15. Additional resources About remote health monitoring 3.16. Next steps Opt out of remote health reporting Customize your cluster | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIPs: - 10.40.142.7 12 defaultMachinePlatform: bootType: Legacy categories: 13 - key: <category_key_name> value: <category_value> project: 14 type: name name: <project_name> ingressVIPs: - 10.40.142.8 15 prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid>",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3",
"apiVersion: v1 baseDomain: example.com compute: controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1",
"openshift-install create manifests --dir <installation_directory> 1",
"cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests",
"ls ./<installation_directory>/manifests",
"cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml",
"cd <path_to_installation_directory>/manifests",
"apiVersion: v1 kind: ConfigMap metadata: name: cloud-conf namespace: openshift-cloud-controller-manager data: cloud.conf: \"{ \\\"prismCentral\\\": { \\\"address\\\": \\\"<prism_central_FQDN/IP>\\\", 1 \\\"port\\\": 9440, \\\"credentialRef\\\": { \\\"kind\\\": \\\"Secret\\\", \\\"name\\\": \\\"nutanix-credentials\\\", \\\"namespace\\\": \\\"openshift-cloud-controller-manager\\\" } }, \\\"topologyDiscovery\\\": { \\\"type\\\": \\\"Prism\\\", \\\"topologyCategories\\\": null }, \\\"enableCustomLabeling\\\": true }\"",
"spec: cloudConfig: key: config name: cloud-provider-config",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: nutanix: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_nutanix/installing-nutanix-installer-provisioned |
3.6. Other Schema Resources | 3.6. Other Schema Resources See the following links for more information about standard LDAPv3 schema: RFC 2251: Lightweight Directory Access Protocol (v3), http://www.ietf.org/rfc/rfc2251.txt RFC 2252: LDAPv3 Attribute Syntax Definitions, http://www.ietf.org/rfc/rfc2252.txt RFC 2256: Summary of the X.500 User Schema for Use with LDAPv3, http://www.ietf.org/rfc/rfc2256.txt Internet Engineering Task Force (IETF), http://www.ietf.org/ Understanding and Deploying LDAP Directory Services . T. Howes, M. Smith, G. Good, Macmillan Technical Publishing, 1999. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_the_directory_schema-other_schema_resources |
Chapter 3. Installing a cluster on OpenStack with customizations | Chapter 3. Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.12, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.12 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 3.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 3.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 3.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 3.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. 
After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 3.5. Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. 
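If you are not sure of the exact availability zone name, you can list the volume (Cinder) availability zones with the RHOSP CLI. A minimal sketch, assuming the openstack client is configured for your cloud:
openstack availability zone list --volume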
From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the next several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 3.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process.
The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 3.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 3.8. Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... 
[LoadBalancer] use-octavia=true 1 lb-provider = "amphora" 2 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #... 1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 4 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 5 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 7 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 3.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. 
You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 
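Because the installation program consumes the install-config.yaml file, keep a copy under a different name or outside the installation directory. A minimal sketch; the .backup suffix is only an example name:
cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup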
Additional resources See Installation configuration parameters section for more information about the available parameters. 3.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. 
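After the cluster is installed, you can inspect the resulting cluster-wide proxy settings with the oc CLI. A minimal sketch, assuming oc is logged in to the cluster:
oc get proxy/cluster -o yaml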
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.11. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 3.11.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 3.2. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 3.11.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 3.3. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. 
networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 3.11.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 3.4. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). 
String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 3.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 3.5. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 3.11.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 3.6. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. 
Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. Additional networks that are attached to a control plane machine are also attached to the bootstrap node. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. 
Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 3.11.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. 
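For orientation, a minimal sketch of the relevant install-config.yaml fragment follows; the UUID and the CIDR are placeholders borrowed from the provider-network example later in this section, so substitute the values for your own RHOSP subnet:
platform:
  openstack:
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24
The machineNetwork CIDR must match the CIDR of the subnet that the UUID identifies, as the requirements that follow spell out.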
Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 3.11.7. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an install-config.yaml file as part of the OpenShift Container Platform installation process. Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. 
An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 3.11.8. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 3.11.8.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. 
If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 3.11.8.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 3.11.9. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 3.12. 
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures. do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.13. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 
3.13.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 3.13.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. 
IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 3.14. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.15. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. 
Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 3.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.18. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"openstack role add --user <user> --project <project> swiftoperator",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_openstack/installing-openstack-installer-custom |
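As an optional, supplementary check that is not part of the official procedure above: once the deployment finishes, you can confirm from the RHOSP side that the installer created the expected resources, for example:
$ openstack server list
$ openstack floating ip list
The resource names are generated by the installer at install time, so the exact names vary by cluster and should be treated as environment-specific.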
3.5. Configuring IP Networking with ifcfg Files | 3.5. Configuring IP Networking with ifcfg Files As a system administrator, you can configure a network interface manually, editing the ifcfg files. Interface configuration (ifcfg) files control the software interfaces for individual network devices. As the system boots, it uses these files to determine what interfaces to bring up and how to configure them. These files are usually named ifcfg- name , where the suffix name refers to the name of the device that the configuration file controls. By convention, the ifcfg file's suffix is the same as the string given by the DEVICE directive in the configuration file itself. Configuring an Interface with Static Network Settings Using ifcfg Files For example, to configure an interface with static network settings using ifcfg files, for an interface with the name enp1s0 , create a file with the name ifcfg-enp1s0 in the /etc/sysconfig/network-scripts/ directory, that contains: For IPv4 configuration DEVICE=enp1s0 BOOTPROTO=none ONBOOT=yes PREFIX=24 IPADDR=10.0.1.27 For IPv6 configuration DEVICE=enp1s0 BOOTPROTO=none ONBOOT=yes IPV6INIT=yes IPV6ADDR=2001:db8::2/48 You do not need to specify the network or broadcast address as this is calculated automatically by ipcalc . For more IPv6 ifcfg configuration options, see nm-settings-ifcfg-rh (5) man page. Important In Red Hat Enterprise Linux 7, the naming convention for network interfaces has been changed, as explained in Chapter 11, Consistent Network Device Naming . Specifying the hardware or MAC address using HWADDR directive can influence the device naming procedure. Configuring an Interface with Dynamic Network Settings Using ifcfg Files To configure an interface named em1 with dynamic network settings using ifcfg files: Create a file with the name ifcfg-em1 in the /etc/sysconfig/network-scripts/ directory, that contains: DEVICE=em1 BOOTPROTO=dhcp ONBOOT=yes To configure an interface to send a different host name to the DHCP server, add the following line to the ifcfg file: DHCP_HOSTNAME= hostname To configure an interface to send a different fully qualified domain name (FQDN) to the DHCP server, add the following line to the ifcfg file: DHCP_FQDN= fully.qualified.domain.name Note Only one directive, either DHCP_HOSTNAME or DHCP_FQDN , should be used in a given ifcfg file. In case both DHCP_HOSTNAME and DHCP_FQDN are specified, only the latter is used. To configure an interface to use particular DNS servers, add the following lines to the ifcfg file: PEERDNS=no DNS1= ip-address DNS2= ip-address where ip-address is the address of a DNS server. This will cause the network service to update /etc/resolv.conf with the specified DNS servers specified. Only one DNS server address is necessary, the other is optional. To configure static routes in the ifcfg file, see Section 4.5, "Configuring Static Routes in ifcfg files" . By default, NetworkManager calls the DHCP client, dhclient , when a profile has been set to obtain addresses automatically by setting BOOTPROTO to dhcp in an interface configuration file. If DHCP is required, an instance of dhclient is started for every Internet protocol, IPv4 and IPv6 , on an interface. If NetworkManager is not running, or is not managing an interface, then the legacy network service will call instances of dhclient as required. For more details on dynamic IP addresses, see Section 1.2, "Comparing Static to Dynamic IP Addressing" . 
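Putting the dynamic-settings directives together, a DHCP profile that sends a custom host name and overrides the DHCP-supplied DNS servers might look like the following sketch; the interface name em1, the host name, and the server addresses are placeholders for your own values:
DEVICE=em1
BOOTPROTO=dhcp
ONBOOT=yes
DHCP_HOSTNAME=myhost
PEERDNS=no
DNS1=192.0.2.1
DNS2=192.0.2.2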
To apply the configuration: Reload the updated connection files: Re-activate the connection: 3.5.1. Managing System-wide and Private Connection Profiles with ifcfg Files The permissions correspond to the USERS directive in the ifcfg files. If the USERS directive is not present, the network profile is available to all users. As an example, the following directive in an ifcfg file makes the connection available only to the users listed: USERS="joe bob alice" You can also set the USERCTL directive to control whether non- root users can manage the device: If you set yes , non- root users are allowed to control this device. If you set no , non- root users are not allowed to control this device. | [
"DEVICE=enp1s0 BOOTPROTO=none ONBOOT=yes PREFIX=24 IPADDR=10.0.1.27",
"DEVICE=enp1s0 BOOTPROTO=none ONBOOT=yes IPV6INIT=yes IPV6ADDR=2001:db8::2/48",
"DEVICE=em1 BOOTPROTO=dhcp ONBOOT=yes",
"PEERDNS=no DNS1= ip-address DNS2= ip-address",
"nmcli connection reload",
"nmcli connection up connection_name"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configuring_IP_Networking_with_ifcg_Files |
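As a quick, informal verification after re-activating the connection, you can check that the addresses and DNS servers were applied; the interface name enp1s0 is taken from the static example above:
$ ip addr show enp1s0
$ cat /etc/resolv.conf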
Chapter 6. Understanding identity provider configuration | Chapter 6. Understanding identity provider configuration The OpenShift Container Platform master includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. 6.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 6.2. Supported identity providers You can configure the following types of identity providers: Identity provider Description htpasswd Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd . Keystone Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. LDAP Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. Basic authentication Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism. Request header Configure a request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. GitHub or GitHub Enterprise Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. GitLab Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Google Configure a google identity provider using Google's OpenID Connect integration . OpenID Connect Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . Once an identity provider has been defined, you can use RBAC to define and apply permissions . 6.3. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system 6.4. Identity provider parameters The following parameters are common to all identity providers: Parameter Description name The provider name is prefixed to provider user names to form an identity name. mappingMethod Defines how new identities are mapped to users when they log in. Enter one of the following values: claim The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity. 
lookup Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. generate Provisions a user with the identity's preferred user name. If a user with the preferred user name is already mapped to an existing identity, a unique user name is generated. For example, myuser2 . This method should not be used in combination with external processes that require exact matches between OpenShift Container Platform user names and identity provider user names, such as LDAP group sync. add Provisions a user with the identity's preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names. Note When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add . 6.5. Sample identity provider CR The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider. Sample identity provider CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . | [
"oc delete secrets kubeadmin -n kube-system",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/authentication_and_authorization/understanding-identity-provider |
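To complement the sample CR above, the following is a hedged sketch of how the referenced htpass-secret might be created before the CR is applied; the file name users.htpasswd and the user name are assumptions, and the htpasswd utility comes from the httpd-tools package:
$ htpasswd -c -B -b users.htpasswd <user_name> <password>
$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config
$ oc apply -f </path/to/oauth_cr.yaml>
The secret is created in the openshift-config namespace, which is where the OAuth configuration looks for identity provider secrets.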
Chapter 2. Certification prerequisites | Chapter 2. Certification prerequisites Note A strong working knowledge of Red Hat Enterprise Linux and Red Hat OpenShift Container Platform is required. A Red Hat Certified Engineer and a Red Hat Certified Specialist in OpenShift Administration accreditation is preferred and suggested before participating. 2.1. Partner eligibility criteria Ensure to meet the following requirements before applying for a Red Hat bare-metal hardware certification: You are part of the Red Hat Hardware Certification program . You are in a support relationship with Red Hat by means of the TSANet network or a custom support agreement. 2.2. Certification targets The certification targets provide details and requirements about the components and products relevant to the certification. Specific information for each of the certification components is provided when applicable. 2.2.1. Server Assisted installer component certification The server must have earned the following certifications: Red Hat Enterprise Linux System Red Hat OpenShift Container Platform The server must be bare-metal. VMs are not supported. Installer provisioned infrastructure (IPI) component certification Ensure that the server must have the following certifications: Red Hat Enterprise Linux System Red Hat OpenShift Container Platform Each certification is keyed to the specific Cloud Platform product version and its associated ironic revision. You can certify your server for RHOCP, if your hardware is compatible with the ironic drivers for that platform. The server must have a baseboard management controller (BMC) installed. 2.2.2. Red Hat Cloud Platform Products Assisted Installer component certification Through this program you can certify bare metal servers for the following versions of Red Hat OpenShift Container Platform 4.13, 4.14, or 4.15 and RHEL 9.2 or 9.4. IPI component certification Through this program you can certify BMC and bare metal servers for the following versions of Red Hat OpenShift Container Platform 4.12, 4.13, 4.14, or 4.15. 2.2.3. Baseboard management controllers (BMC) IPI component certification A BMC is a specialized microcontroller on a server's motherboard that manages the interface between systems management software and physical hardware. The bare metal service in Red Hat Platforms provisions systems in a cluster by using the BMC to control power, network booting, and automate node deployment and termination. BMC can be certified as a component for use in leveraging components, across multiple server systems. Similar to Red Hat Hardware Certification programs, Red Hat leverages partners' internal quality testing to streamline the certification process without adding risk to customer environments. Red Hat recommends partners using component leveraging features in bare-metal hardware certifications conduct their testing with the specific server system, BMC, and Red Hat cloud platform product to validate each combination. However, you do not need to submit individual certification results to Red Hat for every combination. 2.2.4. Bare Metal Drivers IPI component certification BMCs must use ironic drivers and meet the Red Hat OpenShift Platform Node requirements corresponding to the Red Hat Cloud platform product. You cannot certify a BMC that requires an ironic driver that is not included in the Red Hat product. 
| null | https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_openshift_container_platform_hardware_bare_metal_certification_policy_guide/assembly-prerequisites_rhocp-bm-pol-introduction |
Chapter 3. Distribution of content in RHEL 8 | Chapter 3. Distribution of content in RHEL 8 3.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Binary DVD ISO image is larger than 4.7 GB, and as a result, it might not fit on a single-layer DVD. A dual-layer DVD or USB key is recommended when using the Binary DVD ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 3.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 3.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. 
Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one module stream can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest. 3.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, software installation is handled by the YUM tool, which is based on the DNF technology. We deliberately retain the yum term for consistency with previous major versions of RHEL. However, if you type dnf instead of yum, the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/Distribution-of-content-in-RHEL-8
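As an illustrative sketch of working with module streams, using the postgresql module and stream names from the example above (other modules work the same way):
# List the streams that are available for the postgresql module
yum module list postgresql
# Install the default stream (postgresql:10 in this example)
yum module install postgresql:10
# Switching to a different stream later requires resetting the module first
yum module reset postgresql
yum module install postgresql:12   # if a later stream is available in your AppStream content
Because only one stream of a module can be installed on a host at a time, the reset step is required before switching; parallel versions belong in separate containers, as noted above.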
Automation Services Catalog product support matrix | Automation Services Catalog product support matrix Red Hat Ansible Automation Platform 2.3 Supported products for Automation Services Catalog Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/automation_services_catalog_product_support_matrix/index |
4.79. gnutls | 4.79. gnutls 4.79.1. RHSA-2012:0429 - Important: gnutls security update Updated gnutls packages that fix two security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The GnuTLS library provides support for cryptographic algorithms and for protocols such as Transport Layer Security (TLS). Security Fixes CVE-2012-1573 A flaw was found in the way GnuTLS decrypted malformed TLS records. This could cause a TLS/SSL client or server to crash when processing a specially-crafted TLS record from a remote TLS/SSL connection peer. CVE-2011-4128 A boundary error was found in the gnutls_session_get_data() function. A malicious TLS/SSL server could use this flaw to crash a TLS/SSL client or, possibly, execute arbitrary code as the client, if the client passed a fixed-sized buffer to gnutls_session_get_data() before checking the real size of the session data provided by the server. Red Hat would like to thank Matthew Hall of Mu Dynamics for reporting CVE-2012-1573 . Users of GnuTLS are advised to upgrade to these updated packages, which contain backported patches to correct these issues. For the update to take effect, all applications linked to the GnuTLS library must be restarted, or the system rebooted. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/gnutls |
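A short, hedged illustration of the final step: the package update command is standard, and the lsof check is one common way (not the only one) to spot processes that still map the old, now-deleted library and therefore still need a restart.
# Apply the fixed packages
yum update gnutls
# Processes that still map the deleted library copy need to be restarted
lsof -n | grep libgnutls | grep DEL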
1.2. Digital Signatures | 1.2. Digital Signatures Tamper detection relies on a mathematical function called a one-way hash (also called a message digest). A one-way hash is a number of fixed length with the following characteristics: The value of the hash is unique for the hashed data. Any change in the data, even deleting or altering a single character, results in a different value. The content of the hashed data cannot be deduced from the hash. As mentioned in Section 1.1.2, "Public-Key Encryption", it is possible to use a private key for encryption and the corresponding public key for decryption. Although encrypting with a private key is not recommended for protecting sensitive information, it is a crucial part of digitally signing any data. Instead of encrypting the data itself, the signing software creates a one-way hash of the data, then uses the private key to encrypt the hash. The encrypted hash, along with other information such as the hashing algorithm, is known as a digital signature. Figure 1.3, "Using a Digital Signature to Validate Data Integrity" illustrates the way a digital signature can be used to validate the integrity of signed data. Figure 1.3. Using a Digital Signature to Validate Data Integrity Figure 1.3, "Using a Digital Signature to Validate Data Integrity" shows two items transferred to the recipient of some signed data: the original data and the digital signature, which is a one-way hash of the original data encrypted with the signer's private key. To validate the integrity of the data, the receiving software first uses the public key to decrypt the hash. It then uses the same hashing algorithm that generated the original hash to generate a new one-way hash of the same data. (Information about the hashing algorithm used is sent with the digital signature.) Finally, the receiving software compares the new hash against the original hash. If the two hashes match, the data has not changed since it was signed. If they do not match, the data may have been tampered with since it was signed, or the signature may have been created with a private key that does not correspond to the public key presented by the signer. If the two hashes match, the recipient can be certain that the public key used to decrypt the digital signature corresponds to the private key used to create the digital signature. Confirming the identity of the signer also requires some way of confirming that the public key belongs to a particular entity. For more information on authenticating users, see Section 1.3, "Certificates and Authentication". A digital signature is similar to a handwritten signature. Once data have been signed, it is difficult to deny doing so later, assuming the private key has not been compromised. This quality of digital signatures provides a high degree of nonrepudiation; digital signatures make it difficult for the signer to deny having signed the data. In some situations, a digital signature is as legally binding as a handwritten signature. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/introduction_to_public_key_cryptography-digital_signatures
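The hash-then-sign flow described above can be reproduced with standard OpenSSL commands. This is a generic illustration rather than anything specific to Certificate System; the key and file names are placeholders.
# Create an example RSA key pair
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem
# Sign: compute a SHA-256 one-way hash of the data and encrypt it with the private key
openssl dgst -sha256 -sign private.pem -out data.sig data.txt
# Verify: recompute the hash and compare it against the decrypted signature
# (prints "Verified OK" when the hashes match)
openssl dgst -sha256 -verify public.pem -signature data.sig data.txt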
D.3. Controlling Activation with Tags | D.3. Controlling Activation with Tags You can specify in the configuration file that only certain logical volumes should be activated on that host. For example, the following entry acts as a filter for activation requests (such as vgchange -ay ) and only activates vg1/lvol0 and any logical volumes or volume groups with the database tag in the metadata on that host. There is a special match "@*" that causes a match only if any metadata tag matches any host tag on that machine. As another example, consider a situation where every machine in the cluster has the following entry in the configuration file: If you want to activate vg1/lvol2 only on host db2 , do the following: Run lvchange --addtag @db2 vg1/lvol2 from any host in the cluster. Run lvchange -ay vg1/lvol2 . This solution involves storing host names inside the volume group metadata. | [
"activation { volume_list = [\"vg1/lvol0\", \"@database\" ] }",
"tags { hosttags = 1 }"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/tag_activation |
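After following the steps above, the stored tags and the resulting activation state can be checked on the host. This is a small illustrative addition; the volume and tag names match the example above.
# Show the tags recorded in the volume group metadata
lvs -o lv_name,vg_name,lv_tags vg1
# Confirm which logical volumes are currently active on this host
lvs -o lv_name,lv_attr vg1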
15.2.2. Removing an LVM2 Logical Volume for Swap | 15.2.2. Removing an LVM2 Logical Volume for Swap To remove a swap logical volume (assuming /dev/VolGroup00/LogVol02 is the swap volume you want to remove): Procedure 15.4. Remove a swap logical volume Disable swapping for the associated logical volume: Remove the LVM2 logical volume of size 512 MB: Remove the following entry from the /etc/fstab file: /dev/VolGroup00/LogVol02 swap swap defaults 0 0 To verify that the swap logical volume was successfully removed, use cat /proc/swaps or free to inspect the swap space. | [
"swapoff -v /dev/VolGroup00/LogVol02",
"lvremove /dev/VolGroup00/LogVol02"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/swap-removing-lvm2 |
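A short verification sketch for the procedure above; the device and fstab entry follow the example, so adjust the names to your own volumes.
# The removed volume must no longer appear in the list of active swap spaces
cat /proc/swaps
free -m
# No output is expected here if the fstab entry was removed correctly
grep LogVol02 /etc/fstab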
Migrating from version 3 to 4 | Migrating from version 3 to 4 OpenShift Container Platform 4.10 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team | [
"oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>",
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc run test --image registry.redhat.io/ubi8 --command sleep infinity",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"az login",
"AZURE_RESOURCE_GROUP=Velero_Backups",
"az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1",
"AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"",
"az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot",
"BLOB_CONTAINER=velero",
"az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID",
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv` AZURE_CLIENT_ID=`az ad sp list --display-name \"velero\" --query '[0].appId' -o tsv`",
"cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')",
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc",
"registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator",
"containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')",
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"oc replace --force -f operator.yml",
"oc scale -n openshift-migration --replicas=0 deployment/migration-operator",
"oc scale -n openshift-migration --replicas=1 deployment/migration-operator",
"oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc create -f controller.yml",
"oc sa get-token migration-controller -n openshift-migration",
"oc get pods -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"spec: indirectImageMigration: true indirectVolumeMigration: true",
"oc replace -f migplan.yaml -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"oc get pv",
"oc get pods --all-namespaces | egrep -v 'Running | Completed'",
"oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'",
"oc get csr -A | grep pending -i",
"oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'",
"oc sa get-token migration-controller -n openshift-migration",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ",
"oc create route passthrough --service=docker-registry --port=5000 -n default",
"oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.7):/crane ./",
"oc config view",
"crane tunnel-api [--namespace <namespace>] --destination-context <destination-cluster> --source-context <source-cluster>",
"crane tunnel-api --namespace my_tunnel --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin --source-context default/192-168-122-171-nip-io:8443/admin",
"oc get po -n <namespace>",
"NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s",
"oc logs -f -n <namespace> <pod_name> -c openvpn",
"oc get service -n <namespace>",
"oc sa get-token -n openshift-migration migration-controller",
"oc create route passthrough --service=docker-registry -n default",
"oc create route passthrough --service=image-registry -n openshift-image-registry",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF",
"oc sa get-token migration-controller -n openshift-migration | base64 -w 0",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF",
"oc describe cluster <cluster>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF",
"echo -n \"<key>\" | base64 -w 0 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF",
"oc describe migstorage <migstorage>",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF",
"oc describe migplan <migplan> -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF",
"oc watch migmigration <migmigration> -n openshift-migration",
"Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47",
"- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces",
"- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"",
"- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail",
"- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"",
"oc edit migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2",
"oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1",
"name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims",
"spec: namespaces: - namespace_2 - namespace_1:namespace_2",
"spec: namespaces: - namespace_1:namespace_1",
"spec: namespaces: - namespace_1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false",
"oc edit migrationcontroller -n openshift-migration",
"mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11",
"oc -n openshift-migration get pods | grep log",
"oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 -- /usr/bin/gather_metrics_dump",
"tar -xvzf must-gather/metrics/prom_data.tar.gz",
"make prometheus-run",
"Started Prometheus on http://localhost:9090",
"make prometheus-cleanup",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"oc get migmigration <migmigration> -o yaml",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>",
"Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>",
"time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"oc get migmigration -n openshift-migration",
"NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s",
"oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration",
"name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>",
"apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0",
"apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15",
"podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>",
"podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>",
"podman pull <registry_url>:<port>/openshift/<image>",
"podman tag <registry_url>:<port>/openshift/<image> \\ 1 <registry_url>:<port>/openshift/<image> 2",
"podman push <registry_url>:<port>/openshift/<image> 1",
"oc get imagestream -n openshift | grep <image>",
"NAME IMAGE REPOSITORY TAGS UPDATED my_image image-registry.openshift-image-registry.svc:5000/openshift/my_image latest 32 seconds ago",
"oc describe migmigration <pod> -n openshift-migration",
"Some or all transfer pods are not running for more than 10 mins on destination cluster",
"oc get namespace <namespace> -o yaml 1",
"oc edit namespace <namespace>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"",
"echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2",
"oc logs <Velero_Pod> -n openshift-migration",
"level=error msg=\"Error checking repository for stale locks\" error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1",
"spec: restic_timeout: 1h 1",
"status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2",
"oc describe <registry-example-migration-rvwcm> -n openshift-migration",
"status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration",
"oc describe <migration-example-rvwcm-98t49>",
"completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>",
"oc logs -f <restic-nr2v5>",
"backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration",
"spec: restic_supplemental_groups: <group_id> 1",
"spec: restic_supplemental_groups: - 5555 - 6666",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF",
"oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1",
"oc scale deployment <deployment> --replicas=<premigration_replicas>",
"apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"",
"oc get pod -n <namespace>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/migrating_from_version_3_to_4/index |
Chapter 1. About specialized hardware and driver enablement | Chapter 1. About specialized hardware and driver enablement The Driver Toolkit (DTK) is a container image in the OpenShift Container Platform payload which is meant to be used as a base image on which to build driver containers. The Driver Toolkit image contains the kernel packages commonly required as dependencies to build or install kernel modules as well as a few tools needed in driver containers. The version of these packages will match the kernel version running on the RHCOS nodes in the corresponding OpenShift Container Platform release. Driver containers are container images used for building and deploying out-of-tree kernel modules and drivers on container operating systems such as Red Hat Enterprise Linux CoreOS (RHCOS). Kernel modules and drivers are software libraries running with a high level of privilege in the operating system kernel. They extend the kernel functionalities or provide the hardware-specific code required to control new devices. Examples include hardware devices like field-programmable gate arrays (FPGA) or graphics processing units (GPU), and software-defined storage solutions, which all require kernel modules on client machines. Driver containers are the first layer of the software stack used to enable these technologies on OpenShift Container Platform deployments. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/specialized_hardware_and_driver_enablement/about-hardware-enablement |
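A minimal sketch of how the DTK is typically consumed as a base image. Only the release-image lookup and the general build flow follow from the description above; the my-module directory, its Makefile, and the KVER lookup are hypothetical placeholders, and the commands assume you are logged in to the cluster.
# Query the Driver Toolkit image that matches the cluster's current release payload
DTK_IMAGE=$(oc adm release info --image-for=driver-toolkit)

# Build a driver container that compiles a hypothetical out-of-tree module against the
# kernel-devel content shipped in the DTK (assumed to live under /usr/src/kernels)
cat > Containerfile <<EOF
FROM ${DTK_IMAGE} as builder
COPY my-module/ /usr/src/my-module/
RUN make -C /usr/src/my-module KVER=\$(ls /usr/src/kernels/)
EOF

podman build -t my-driver-container -f Containerfile .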
Chapter 12. Network Observability CLI | Chapter 12. Network Observability CLI 12.1. Installing the Network Observability CLI The Network Observability CLI ( oc netobserv ) is deployed separately from the Network Observability Operator. The CLI is available as an OpenShift CLI ( oc ) plugin. It provides a lightweight way to quickly debug and troubleshoot with network observability. 12.1.1. About the Network Observability CLI You can quickly debug and troubleshoot networking issues by using the Network Observability CLI ( oc netobserv ). The Network Observability CLI is a flow and packet visualization tool that relies on eBPF agents to stream collected data to an ephemeral collector pod. It requires no persistent storage during the capture. After the run, the output is transferred to your local machine. This enables quick, live insight into packets and flow data without installing the Network Observability Operator. Important CLI capture is meant to run only for short durations, such as 8-10 minutes. If it runs for too long, it can be difficult to delete the running process. 12.1.2. Installing the Network Observability CLI Installing the Network Observability CLI ( oc netobserv ) is a separate procedure from the Network Observability Operator installation. This means that, even if you have the Operator installed from OperatorHub, you need to install the CLI separately. Note You can optionally use Krew to install the netobserv CLI plugin. For more information, see "Installing a CLI plugin with Krew". Prerequisites You must install the OpenShift CLI ( oc ). You must have a macOS or Linux operating system. Procedure Download the oc netobserv file that corresponds with your architecture. For example, for the amd64 archive: USD curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64 Make the file executable: USD chmod +x ./oc-netobserv-amd64 Move the extracted netobserv-cli binary to a directory that is on your PATH , such as /usr/local/bin/ : USD sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv Verification Verify that oc netobserv is available: USD oc netobserv version Example output Netobserv CLI version <version> Additional resources Installing and using CLI plugins Installing the CLI Manager Operator 12.2. Using the Network Observability CLI You can visualize and filter the flows and packets data directly in the terminal to see specific usage, such as identifying who is using a specific port. The Network Observability CLI collects flows as JSON and database files or packets as a PCAP file, which you can use with third-party tools. 12.2.1. Capturing flows You can capture flows and filter on any resource or zone in the data to solve use cases, such as displaying Round-Trip Time (RTT) between two zones. Table visualization in the CLI provides viewing and flow search capabilities. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Capture flows with filters enabled by running the following command: USD oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Add filters to the live table filter prompt in the terminal to further refine the incoming flows. For example: live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once Use the PageUp and PageDown keys to toggle between None , Resource , Zone , Host , Owner and all of the above . To stop capturing, press Ctrl + C . 
The data that was captured is written to two separate files in an ./output directory located in the same path used to install the CLI. View the captured data in the ./output/flow/<capture_date_time>.json JSON file, which contains JSON arrays of the captured data. Example JSON file { "AgentIP": "10.0.1.76", "Bytes": 561, "DnsErrno": 0, "Dscp": 20, "DstAddr": "f904:ece9:ba63:6ac7:8018:1e5:7130:0", "DstMac": "0A:58:0A:80:00:37", "DstPort": 9999, "Duplicate": false, "Etype": 2048, "Flags": 16, "FlowDirection": 0, "IfDirection": 0, "Interface": "ens5", "K8S_FlowLayer": "infra", "Packets": 1, "Proto": 6, "SrcAddr": "3e06:6c10:6440:2:a80:37:b756:270f", "SrcMac": "0A:58:0A:80:00:01", "SrcPort": 46934, "TimeFlowEndMs": 1709741962111, "TimeFlowRttNs": 121000, "TimeFlowStartMs": 1709741962111, "TimeReceived": 1709741964 } You can use SQLite to inspect the ./output/flow/<capture_date_time>.db database file. For example: Open the file by running the following command: USD sqlite3 ./output/flow/<capture_date_time>.db Query the data by running a SQLite SELECT statement, for example: sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10; Example output 12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1 12.2.2. Capturing packets You can capture packets using the Network Observability CLI. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Run the packet capture with filters enabled: USD oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Add filters to the live table filter prompt in the terminal to refine the incoming packets. An example filter is as follows: live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once Use the PageUp and PageDown keys to toggle between None , Resource , Zone , Host , Owner and all of the above . To stop capturing, press Ctrl + C . View the captured data, which is written to a single file in an ./output/pcap directory located in the same path that was used to install the CLI: The ./output/pcap/<capture_date_time>.pcap file can be opened with Wireshark. 12.2.3. Capturing metrics You can generate on-demand dashboards in Prometheus by using a service monitor for Network Observability. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Capture metrics with filters enabled by running the following command: Example output USD oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Open the link provided in the terminal to view the NetObserv / On-Demand dashboard: Example URL https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli Note Features that are not enabled present as empty graphs. 12.2.4. 
Cleaning the Network Observability CLI You can manually clean the CLI workload by running oc netobserv cleanup . This command removes all the CLI components from your cluster. When you end a capture, this command is run automatically by the client. You might be required to manually run it if you experience connectivity issues. Procedure Run the following command: USD oc netobserv cleanup Additional resources Network Observability CLI reference 12.3. Network Observability CLI (oc netobserv) reference The Network Observability CLI ( oc netobserv ) has most features and filtering options that are available for the Network Observability Operator. You can pass command line arguments to enable features or filtering options. 12.3.1. Network Observability CLI usage You can use the Network Observability CLI ( oc netobserv ) to pass command line arguments to capture flows data, packets data, and metrics for further analysis and enable features supported by the Network Observability Operator. 12.3.1.1. Syntax The basic syntax for oc netobserv commands: oc netobserv syntax USD oc netobserv [<command>] [<feature_option>] [<command_options>] 1 1 1 Feature options can only be used with the oc netobserv flows command. They cannot be used with the oc netobserv packets command. 12.3.1.2. Basic commands Table 12.1. Basic commands Command Description flows Capture flows information. For subcommands, see the "Flows capture options" table. packets Capture packets data. For subcommands, see the "Packets capture options" table. metrics Capture metrics data. For subcommands, see the "Metrics capture options" table. follow Follow collector logs when running in background. stop Stop collection by removing agent daemonset. copy Copy collector generated files locally. cleanup Remove the Network Observability CLI components. version Print the software version. help Show help. 12.3.1.3. Flows capture options Flows capture has mandatory commands as well as additional options, such as enabling extra features about packet drops, DNS latencies, Round-trip time, and filtering. 
oc netobserv flows syntax USD oc netobserv flows [<feature_option>] [<command_options>] Option Description Default --enable_all enable all eBPF features false --enable_dns enable DNS tracking false --enable_network_events enable network events monitoring false --enable_pkt_translation enable packet translation false --enable_pkt_drop enable packet drop false --enable_rtt enable RTT tracking false --enable_udn_mapping enable User Defined Network mapping false --get-subnets get subnets information false --background run in background false --copy copy the output files locally prompt --log-level components logs info --max-time maximum capture time 5m --max-bytes maximum capture bytes 50000000 = 50MB --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - --interfaces interfaces to monitor - Example running flows capture on TCP protocol and port 49051 with PacketDrop and RTT features enabled: USD oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 12.3.1.4. Packets capture options You can filter packets capture data the same way as flows capture by using the filters. Certain features, such as packet drops, DNS, RTT, and network events, are only available for flows and metrics capture. oc netobserv packets syntax USD oc netobserv packets [<option>] Option Description Default --background run in background false --copy copy the output files locally prompt --log-level components logs info --max-time maximum capture time 5m --max-bytes maximum capture bytes 50000000 = 50MB --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - Example running packets capture on TCP protocol and port 49051: USD oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 12.3.1.5. Metrics capture options You can enable features and use filters on metrics capture, the same as flows capture. The generated graphs fill accordingly in the dashboard.
oc netobserv metrics syntax USD oc netobserv metrics [<option>] Option Description Default --enable_all enable all eBPF features false --enable_dns enable DNS tracking false --enable_network_events enable network events monitoring false --enable_pkt_translation enable packet translation false --enable_pkt_drop enable packet drop false --enable_rtt enable RTT tracking false --enable_udn_mapping enable User Defined Network mapping false --get-subnets get subnets information false --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - --interfaces interfaces to monitor - Example running metrics capture for TCP drops USD oc netobserv metrics --enable_pkt_drop --protocol=TCP | [
"curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64",
"chmod +x ./oc-netobserv-amd64",
"sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv",
"oc netobserv version",
"Netobserv CLI version <version>",
"oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once",
"{ \"AgentIP\": \"10.0.1.76\", \"Bytes\": 561, \"DnsErrno\": 0, \"Dscp\": 20, \"DstAddr\": \"f904:ece9:ba63:6ac7:8018:1e5:7130:0\", \"DstMac\": \"0A:58:0A:80:00:37\", \"DstPort\": 9999, \"Duplicate\": false, \"Etype\": 2048, \"Flags\": 16, \"FlowDirection\": 0, \"IfDirection\": 0, \"Interface\": \"ens5\", \"K8S_FlowLayer\": \"infra\", \"Packets\": 1, \"Proto\": 6, \"SrcAddr\": \"3e06:6c10:6440:2:a80:37:b756:270f\", \"SrcMac\": \"0A:58:0A:80:00:01\", \"SrcPort\": 46934, \"TimeFlowEndMs\": 1709741962111, \"TimeFlowRttNs\": 121000, \"TimeFlowStartMs\": 1709741962111, \"TimeReceived\": 1709741964 }",
"sqlite3 ./output/flow/<capture_date_time>.db",
"sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10;",
"12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1",
"oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once",
"oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli",
"oc netobserv cleanup",
"oc netobserv [<command>] [<feature_option>] [<command_options>] 1",
"oc netobserv flows [<feature_option>] [<command_options>]",
"oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"oc netobserv packets [<option>]",
"oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"oc netobserv metrics [<option>]",
"oc netobserv metrics --enable_pkt_drop --protocol=TCP"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_observability/network-observability-cli-1 |
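Because a flows capture also produces the SQLite database described earlier in this chapter, simple aggregations can be scripted against it. The following sketch assumes the flow table and the column names shown in the JSON example (SrcAddr, DstAddr, Bytes, Packets); the database path is a placeholder and must be replaced with your actual capture file.

# Sketch: list the top 5 source/destination pairs by total bytes in a finished flows capture.
DB=./output/flow/<capture_date_time>.db   # placeholder file name
sqlite3 "$DB" "SELECT SrcAddr, DstAddr, SUM(Bytes) AS total_bytes, SUM(Packets) AS packets
  FROM flow GROUP BY SrcAddr, DstAddr ORDER BY total_bytes DESC LIMIT 5;"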
Chapter 94. ExternalConfigurationEnv schema reference | Chapter 94. ExternalConfigurationEnv schema reference The type ExternalConfigurationEnv has been deprecated. Please use ContainerEnvVar instead. Used in: ExternalConfiguration Property Property type Description name string Name of the environment variable which will be passed to the Kafka Connect pods. The name of the environment variable cannot start with KAFKA_ or STRIMZI_ . valueFrom ExternalConfigurationEnvVarSource Value of the environment variable which will be passed to the Kafka Connect pods. It can be passed either as a reference to a Secret or ConfigMap field. The field must specify exactly one Secret or ConfigMap. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-externalconfigurationenv-reference
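As a rough illustration of how the deprecated field is used, the following sketch adds an environment variable sourced from a Secret to an existing KafkaConnect resource through its externalConfiguration section. The resource name (my-connect), Secret name (aws-creds), and key (access-key-id) are placeholders and are not taken from this reference; new configurations should use ContainerEnvVar instead, as noted above.

# Sketch: patch a hypothetical KafkaConnect resource named "my-connect".
oc patch kafkaconnect my-connect --type=merge -p '{
  "spec": {
    "externalConfiguration": {
      "env": [
        {
          "name": "AWS_ACCESS_KEY_ID",
          "valueFrom": {
            "secretKeyRef": { "name": "aws-creds", "key": "access-key-id" }
          }
        }
      ]
    }
  }
}'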
Chapter 10. Network configuration | Chapter 10. Network configuration The following sections describe the basics of network configuration with the Assisted Installer. 10.1. Cluster networking There are various network types and addresses used by OpenShift and listed in the following table. Important IPv6 is not currently supported in the following configurations: Single stack Primary within dual stack Type DNS Description clusterNetwork The IP address pools from which pod IP addresses are allocated. serviceNetwork The IP address pool for services. machineNetwork The IP address blocks for machines forming the cluster. apiVIP api.<clustername.clusterdomain> The VIP to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address. apiVIPs api.<clustername.clusterdomain> The VIPs to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If using dual stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting. ingressVIP *.apps.<clustername.clusterdomain> The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address. ingressVIPs *.apps.<clustername.clusterdomain> The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting. Note OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept many IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings will replace apiVIP and IngressVIP , but you must set both the new and old settings when modifying the configuration by using the API. Currently, the Assisted Service can deploy OpenShift Container Platform clusters by using one of the following configurations: IPv4 Dual-stack (IPv4 + IPv6 with IPv4 as primary) Note OVN is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. SDN is supported up to OpenShift Container Platform 4.14, but not for OpenShift Container Platform 4.15 and later releases. 10.1.1. Limitations 10.1.1.1. SDN The SDN controller is not supported with single-node OpenShift. The SDN controller does not support dual-stack networking. The SDN controller is not supported for OpenShift Container Platform 4.15 and later releases. For more information, see Deprecation of the OpenShift SDN network plugin in the OpenShift Container Platform release notes. 10.1.1.2. OVN-Kubernetes For more information, see About the OVN-Kubernetes network plugin . 10.1.2. Cluster network The cluster network is a network from which every pod deployed in the cluster gets its IP address. Given that the workload might live across many nodes forming the cluster, it is important for the network provider to be able to easily find an individual node based on the pod's IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix . The host prefix specifies a length of the subnet assigned to each individual node in the cluster. 
An example of how a cluster might assign addresses for the multi-node cluster: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Creating a 3-node cluster by using this snippet might create the following network topology: Pods scheduled in node #1 get IPs from 10.128.0.0/23 Pods scheduled in node #2 get IPs from 10.128.2.0/23 Pods scheduled in node #3 get IPs from 10.128.4.0/23 Explaining OVN-Kubernetes internals is out of scope for this document, but the pattern previously described provides a way to route Pod-to-Pod traffic between different nodes without keeping a big list of mapping between Pods and their corresponding nodes. 10.1.3. Machine network The machine network is a network used by all the hosts forming the cluster to communicate with each other. This is also the subnet that must include the API and Ingress VIPs. For iSCSI boot volumes, the hosts are connected over two machine networks: one designated for the OpenShift Container Platform installation and the other for iSCSI traffic. During the installation process, ensure that you specify the OpenShift Container Platform network. Using the iSCSI network will result in an error for the host. 10.1.4. Single-node OpenShift compared to multi-node cluster Depending on whether you are deploying single-node OpenShift or a multi-node cluster, different values are mandatory. The following table explains this in more detail. Parameter Single-node OpenShift Multi-node cluster with DHCP mode Multi-node cluster without DHCP mode clusterNetwork Required Required Required serviceNetwork Required Required Required machineNetwork Auto-assign possible (*) Auto-assign possible (*) Auto-assign possible (*) apiVIP Forbidden Forbidden Required apiVIPs Forbidden Forbidden Required in 4.12 and later releases ingressVIP Forbidden Forbidden Required ingressVIPs Forbidden Forbidden Required in 4.12 and later releases (*) Auto assignment of the machine network CIDR happens if there is only a single host network. Otherwise you need to specify it explicitly. 10.1.5. Air-gapped environments The workflow for deploying a cluster without Internet access has some prerequisites, which are out of scope of this document. You can consult the Zero Touch Provisioning the hard way Git repository for some insights. 10.2. VIP DHCP allocation The VIP DHCP allocation is a feature allowing users to skip the requirement of manually providing virtual IPs for API and Ingress by leveraging the ability of a service to automatically assign those IP addresses from the DHCP server. If you enable the feature, instead of using api_vips and ingress_vips from the cluster configuration, the service will send a lease allocation request and based on the reply it will use VIPs accordingly. The service will allocate the IP addresses from the Machine Network. Please note this is not an OpenShift Container Platform feature and it has been implemented in the Assisted Service to make the configuration easier. Important VIP DHCP allocation is currently limited to the OpenShift Container Platform SDN network type. SDN is not supported from OpenShift Container Platform version 4.15 and later. Therefore, support for VIP DHCP allocation is also ending from OpenShift Container Platform 4.15 and later. 10.2.1. 
Example payload to enable autoallocation { "vip_dhcp_allocation": true, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ], "machine_networks": [ { "cidr": "192.168.127.0/24" } ] } 10.2.2. Example payload to disable autoallocation { "api_vips": [ { "ip": "192.168.127.100" } ], "ingress_vips": [ { "ip": "192.168.127.101" } ], "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ] } 10.3. Additional resources Bare metal IPI documentation provides additional explanation of the syntax for the VIP addresses. 10.4. Understanding differences between user- and cluster-managed networking User managed networking is a feature in the Assisted Installer that allows customers with non-standard network topologies to deploy OpenShift Container Platform clusters. Examples include: Customers with an external load balancer who do not want to use keepalived and VRRP for handling VIP addresses. Deployments with cluster nodes distributed across many distinct L2 network segments. 10.4.1. Validations There are various network validations happening in the Assisted Installer before it allows the installation to start. When you enable User Managed Networking, the following validations change: The L3 connectivity check (ICMP) is performed instead of the L2 check (ARP). The MTU validation verifies the maximum transmission unit (MTU) value for all interfaces and not only for the machine network. 10.5. Static network configuration You may use static network configurations when generating or updating the discovery ISO. 10.5.1. Prerequisites You are familiar with NMState . 10.5.2. NMState configuration The NMState file in YAML format specifies the desired network configuration for the host. It has the logical names of the interfaces that will be replaced with the actual name of the interface at discovery time. 10.5.2.1. Example of NMState configuration dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254 10.5.3. MAC interface mapping MAC interface map is an attribute that maps logical interfaces defined in the NMState configuration with the actual interfaces present on the host. The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should only contain an entry for parent interfaces. 10.5.3.1. Example of MAC interface mapping mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ] 10.5.4. Additional NMState configuration examples The following examples are only meant to show a partial configuration. They are not meant for use as-is, and you should always adjust to the environment where they will be used. If used incorrectly, they can leave your machines with no network connectivity. 10.5.4.1.
Tagged VLAN interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 reorder-headers: true 10.5.4.2. Network bond interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: miimon: "140" port: - eth0 - eth1 name: bond0 state: up type: bond 10.6. Applying a static network configuration with the API You can apply a static network configuration by using the Assisted Installer API. Important A static IP configuration is not supported in the following scenarios: OpenShift Container Platform installations on Oracle Cloud Infrastructure. OpenShift Container Platform installations on iSCSI boot volumes. Prerequisites You have created an infrastructure environment using the API or have created a cluster using the web console. You have your infrastructure environment ID exported in your shell as USDINFRA_ENV_ID . You have credentials to use when accessing the API and have exported a token as USDAPI_TOKEN in your shell. You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml . Procedure Create a temporary file /tmp/request-body.txt with the API request: jq -n --arg NMSTATE_YAML1 "USD(cat server-a.yaml)" --arg NMSTATE_YAML2 "USD(cat server-b.yaml)" \ '{ "static_network_config": [ { "network_yaml": USDNMSTATE_YAML1, "mac_interface_map": [{"mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0"}, {"mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1"}] }, { "network_yaml": USDNMSTATE_YAML2, "mac_interface_map": [{"mac_address": "02:00:00:9f:85:eb", "logical_nic_name": "eth1"}, {"mac_address": "02:00:00:c8:be:9b", "logical_nic_name": "eth0"}] } ] }' >> /tmp/request-body.txt Refresh the API token: USD source refresh-token Send the request to the Assisted Service API endpoint: USD curl -H "Content-Type: application/json" \ -X PATCH -d @/tmp/request-body.txt \ -H "Authorization: Bearer USD{API_TOKEN}" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID 10.7. Additional resources Applying a static network configuration with the web console 10.8. Converting to dual-stack networking Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets. 10.8.1. Prerequisites You are familiar with OVN-K8s documentation 10.8.2. Example payload for single-node OpenShift { "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } 10.8.3. 
Example payload for an OpenShift Container Platform cluster consisting of many nodes { "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "api_vips": [ { "ip": "192.168.127.100" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334" } ], "ingress_vips": [ { "ip": "192.168.127.101" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7335" } ], "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } 10.8.4. Limitations The api_vips IP address and ingress_vips IP address settings must be of the primary IP address family when using dual-stack networking, which must be IPv4 addresses. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values. 10.9. Additional resources Understanding OpenShift networking About the OpenShift SDN network plugin OVN-Kubernetes - CNI network provider Dual-stack Service configuration scenarios Installing a user-provisioned bare metal cluster with network customizations . Cluster Network Operator configuration object | [
"clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"{ \"vip_dhcp_allocation\": true, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ], \"machine_networks\": [ { \"cidr\": \"192.168.127.0/24\" } ] }",
"{ \"api_vips\": [ { \"ip\": \"192.168.127.100\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" } ], \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ] }",
"dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254",
"mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ]",
"interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 reorder-headers: true",
"interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: miimon: \"140\" port: - eth0 - eth1 name: bond0 state: up type: bond",
"jq -n --arg NMSTATE_YAML1 \"USD(cat server-a.yaml)\" --arg NMSTATE_YAML2 \"USD(cat server-b.yaml)\" '{ \"static_network_config\": [ { \"network_yaml\": USDNMSTATE_YAML1, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:2c:23:a5\", \"logical_nic_name\": \"eth0\"}, {\"mac_address\": \"02:00:00:68:73:dc\", \"logical_nic_name\": \"eth1\"}] }, { \"network_yaml\": USDNMSTATE_YAML2, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:9f:85:eb\", \"logical_nic_name\": \"eth1\"}, {\"mac_address\": \"02:00:00:c8:be:9b\", \"logical_nic_name\": \"eth0\"}] } ] }' >> /tmp/request-body.txt",
"source refresh-token",
"curl -H \"Content-Type: application/json\" -X PATCH -d @/tmp/request-body.txt -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID",
"{ \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] }",
"{ \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"api_vips\": [ { \"ip\": \"192.168.127.100\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7334\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7335\" } ], \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_network-configuration |
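To confirm that the static network configuration from the procedure above was stored, you can read the infrastructure environment back from the same endpoint. The following is a sketch only: it assumes that a GET request on the infra-envs endpoint returns a static_network_config field, and it reuses the INFRA_ENV_ID and API_TOKEN variables from the prerequisites.

# Sketch: read back the infra-env and print the stored static network configuration.
source refresh-token
curl -s -H "Authorization: Bearer ${API_TOKEN}" \
  "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" | jq '.static_network_config'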
Chapter 4. Performing additional configuration on Satellite Server | Chapter 4. Performing additional configuration on Satellite Server 4.1. Using Red Hat Insights with Satellite Server You can use Red Hat Insights to diagnose systems and downtime related to security exploits, performance degradation and stability failures. You can use the dashboard to quickly identify key risks to stability, security, and performance. You can sort by category, view details of the impact and resolution, and then determine what systems are affected. Note that you do not require a Red Hat Insights entitlement in your subscription manifest. For more information about Satellite and Red Hat Insights, see Red Hat Insights on Satellite Red Hat Enterprise Linux (RHEL) . To maintain your Satellite Server, and improve your ability to monitor and diagnose problems you might have with Satellite, install Red Hat Insights on Satellite Server and register Satellite Server with Red Hat Insights. Scheduling insights-client Note that you can change the default schedule for running insights-client by configuring insights-client.timer on Satellite. For more information, see Changing the insights-client schedule in the Client Configuration Guide for Red Hat Insights . Procedure To install Red Hat Insights on Satellite Server, enter the following command: To register Satellite Server with Red Hat Insights, enter the following command: 4.2. Disabling Red Hat Insights registration If you decide that you will not use Red Hat Insights, you can unregister Satellite Server from Insights. Prerequisites You have registered Satellite to Red Hat Insights. Procedure To unregister Satellite Server from Red Hat Insights, enter the following command: 4.3. Importing the Red Hat Satellite Client 6 repository The Red Hat Satellite Client 6 repository provides client integration tools, such as katello-host-tools or puppet-agent packages, for hosts registered to Satellite. You must enable the repository, synchronize the repository to your Satellite Server, and enable the repository on your hosts. 4.3.1. Enabling the Red Hat Satellite Client 6 repository Enable the Red Hat Satellite Client 6 repository for every major version of Red Hat Enterprise Linux that you intend to run on your hosts. After enabling a Red Hat repository, Satellite creates a product for this repository automatically. Prerequisites Ensure that a subscription manifest has been imported to your organization. For more information, see Section 3.7, "Importing a Red Hat subscription manifest into Satellite Server" . Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . Ensure that the RPM repository type is selected. In the search field, type name ~ "Satellite Client" and press Enter . Optionally, enable the Recommended Repositories filter to limit the results. Click the name of the required repository to expand the repository set. For the required architecture, click the + icon to enable the repository. 4.3.2. Synchronizing the Red Hat Satellite Client 6 repository Synchronize the Red Hat Satellite Client 6 repository to import the content to your Satellite Server. Prerequisites You have enabled the Red Hat Satellite Client 6 repository. Procedure In the Satellite web UI, navigate to Content > Sync Status . Click the arrow to the required product to view available repositories. Select the repositories you want to synchronize. Click Synchronize Now . Additional resources You can create a sync plan to update the content regularly. 
For more information, see Creating a sync plan in Managing content . 4.4. Configuring pull-based transport for remote execution By default, remote execution uses push-based SSH as the transport mechanism for the Script provider. If your infrastructure prohibits outgoing connections from Satellite Server to hosts, you can use remote execution with pull-based transport instead, because the host initiates the connection to Satellite Server. The use of pull-based transport is not limited to those infrastructures. The pull-based transport comprises pull-mqtt mode on Capsules in combination with a pull client running on hosts. Note The pull-mqtt mode works only with the Script provider. Ansible and other providers will continue to use their default transport settings. Procedure Enable the pull-based transport on your Satellite Server: Configure the firewall to allow the MQTT service on port 1883: Make the changes persistent: In pull-mqtt mode, hosts subscribe for job notifications to either your Satellite Server or any Capsule Server through which they are registered. Ensure that Satellite Server sends remote execution jobs to that same Satellite Server or Capsule Server: In the Satellite web UI, navigate to Administer > Settings . On the Content tab, set the value of Prefer registered through Capsule for remote execution to Yes . steps Configure your hosts for the pull-based transport. For more information, see Transport modes for remote execution in Managing hosts . 4.5. Configuring Satellite for UEFI HTTP boot provisioning in an IPv6 network Use this procedure to configure Satellite to provision hosts in an IPv6 network with UEFI HTTP Boot provisioning. Prerequisites Ensure that your clients can access DHCP and HTTP servers. Ensure that the UDP ports 67 and 68 are accessible by clients so clients can send DHCP requests and receive DHCP offers. Ensure that the TCP port 8000 is open for clients to download files and Kickstart templates from Satellite and Capsules. Ensure that the host provisioning interface subnet has an HTTP Boot Capsule, and Templates Capsule set. For more information, see Adding a Subnet to Satellite Server in Provisioning hosts . In the Satellite web UI, navigate to Administer > Settings > Provisioning and ensure that the Token duration setting is not set to 0 . Satellite cannot identify clients that are booting from the network by a remote IPv6 address because of unmanaged DHCPv6 service, therefore provisioning tokens must be enabled. Procedure You must disable DHCP management in the installer or not use it. For all IPv6 subnets created in Satellite, set the DHCP Capsule to blank. Optional: If the host and the DHCP server are separated by a router, configure the DHCP relay agent and point to the DHCP server. On Satellite or Capsule from which you provision, update the grub2-efi package to the latest version: 4.6. Configuring Satellite Server with an HTTP proxy Use the following procedures to configure Satellite with an HTTP proxy. 4.6.1. Adding a default HTTP proxy to Satellite If your network uses an HTTP Proxy, you can configure Satellite Server to use an HTTP proxy for requests to the Red Hat Content Delivery Network (CDN) or another content source. Use the FQDN instead of the IP address where possible to avoid losing connectivity because of network changes. The following procedure configures a proxy only for downloading content for Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . 
Procedure In the Satellite web UI, navigate to Infrastructure > HTTP Proxies . Click New HTTP Proxy . In the Name field, enter the name for the HTTP proxy. In the Url field, enter the URL of the HTTP proxy in the following format: https://http-proxy.example.com:8080 . Optional: If authentication is required, in the Username field, enter the username to authenticate with. Optional: If authentication is required, in the Password field, enter the password to authenticate with. To test connection to the proxy, click Test Connection . Click Submit . In the Satellite web UI, navigate to Administer > Settings , and click the Content tab. Set the Default HTTP Proxy setting to the created HTTP proxy. CLI procedure Verify that the http_proxy , https_proxy , and no_proxy variables are not set: Add an HTTP proxy entry to Satellite: Configure Satellite to use this HTTP proxy by default: 4.6.2. Configuring SELinux to ensure access to Satellite on custom ports SELinux ensures access of Red Hat Satellite and Subscription Manager only to specific ports. In the case of the HTTP cache, the TCP ports are 8080, 8118, 8123, and 10001 - 10010. If you use a port that does not have SELinux type http_cache_port_t , complete the following steps. Procedure On Satellite, to verify the ports that are permitted by SELinux for the HTTP cache, enter a command as follows: To configure SELinux to permit a port for the HTTP cache, for example 8088, enter a command as follows: 4.6.3. Using an HTTP proxy for all Satellite HTTP requests If your Satellite Server must remain behind a firewall that blocks HTTP and HTTPS, you can configure a proxy for communication with external systems, including compute resources. Note that if you are using compute resources for provisioning, and you want to use a different HTTP proxy with the compute resources, the proxy that you set for all Satellite communication takes precedence over the proxies that you set for compute resources. Procedure In the Satellite web UI, navigate to Administer > Settings . In the HTTP(S) proxy row, select the adjacent Value column and enter the proxy URL. Click the tick icon to save your changes. CLI procedure Enter the following command: 4.6.4. Excluding hosts from receiving proxied requests If you use an HTTP Proxy for all Satellite HTTP or HTTPS requests, you can prevent certain hosts from communicating through the proxy. Procedure In the Satellite web UI, navigate to Administer > Settings . In the HTTP(S) proxy except hosts row, select the adjacent Value column and enter the names of one or more hosts that you want to exclude from proxy requests. Click the tick icon to save your changes. CLI procedure Enter the following command: 4.6.5. Resetting the HTTP proxy If you want to reset the current HTTP proxy setting, unset the Default HTTP Proxy setting. Procedure In the Satellite web UI, navigate to Administer > Settings , and click the Content tab. Set the Default HTTP Proxy setting to no global default . CLI procedure Set the content_default_http_proxy setting to an empty string: 4.7. Enabling power management on hosts To perform power management tasks on hosts using the intelligent platform management interface (IPMI) or a similar protocol, you must enable the baseboard management controller (BMC) module on Satellite Server. Prerequisites All hosts must have a network interface of BMC type. Satellite Server uses this NIC to pass the appropriate credentials to the host. 
For more information, see Adding a Baseboard Management Controller (BMC) Interface in Managing hosts . Procedure To enable BMC, enter the following command: 4.8. Configuring DNS, DHCP, and TFTP You can manage DNS, DHCP, and TFTP centrally within the Satellite environment, or you can manage them independently after disabling their maintenance on Satellite. You can also run DNS, DHCP, and TFTP externally, outside of the Satellite environment. 4.8.1. Configuring DNS, DHCP, and TFTP on Satellite Server To configure the DNS, DHCP, and TFTP services on Satellite Server, use the satellite-installer command with the options appropriate for your environment. Any changes to the settings require entering the satellite-installer command again. You can enter the command multiple times and each time it updates all configuration files with the changed values. Prerequisites Ensure that the following information is available to you: DHCP IP address ranges DHCP gateway IP address DHCP nameserver IP address DNS information TFTP server name Use the FQDN instead of the IP address where possible in case of network changes. Contact your network administrator to ensure that you have the correct settings. Procedure Enter the satellite-installer command with the options appropriate for your environment. The following example shows configuring full provisioning services: You can monitor the progress of the satellite-installer command displayed in your prompt. You can view the logs in /var/log/foreman-installer/satellite.log . Additional resources For more information about the satellite-installer command, enter satellite-installer --help . 4.8.2. Disabling DNS, DHCP, and TFTP for unmanaged networks If you want to manage TFTP, DHCP, and DNS services manually, you must prevent Satellite from maintaining these services on the operating system and disable orchestration to avoid DHCP and DNS validation errors. However, Satellite does not remove the back-end services on the operating system. Procedure On Satellite Server, enter the following command: In the Satellite web UI, navigate to Infrastructure > Subnets and select a subnet. Click the Capsules tab and clear the DHCP Capsule , TFTP Capsule , and Reverse DNS Capsule fields. In the Satellite web UI, navigate to Infrastructure > Domains and select a domain. Clear the DNS Capsule field. Optional: If you use a DHCP service supplied by a third party, configure your DHCP server to pass the following options: For more information about DHCP options, see RFC 2132 . Note Satellite does not perform orchestration when a Capsule is not set for a given subnet and domain. When enabling or disabling Capsule associations, orchestration commands for existing hosts can fail if the expected records and configuration files are not present. When associating a Capsule to turn orchestration on, ensure the required DHCP and DNS records as well as the TFTP files are in place for the existing Satellite hosts in order to prevent host deletion failures in the future. 4.8.3. Additional resources For more information about configuring DNS, DHCP, and TFTP externally, see Chapter 5, Configuring Satellite Server with external services . For more information about configuring DHCP, DNS, and TFTP services, see Configuring Network Services in Provisioning hosts . 4.9. Configuring Satellite Server for outgoing emails To send email messages from Satellite Server, you can use either an SMTP server, or the sendmail command. 
Prerequisites Some SMTP servers with anti-spam protection or grey-listing features are known to cause problems. To setup outgoing email with such a service either install and configure a vanilla SMTP service on Satellite Server for relay or use the sendmail command instead. Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Email tab and set the configuration options to match your preferred delivery method. The changes have an immediate effect. The following example shows the configuration options for using an SMTP server: Table 4.1. Using an SMTP server as a delivery method Name Example value Delivery method SMTP SMTP address smtp.example.com SMTP authentication login SMTP HELO/EHLO domain example.com SMTP password password SMTP port 25 SMTP username [email protected] The SMTP username and SMTP password specify the login credentials for the SMTP server. The following example uses gmail.com as an SMTP server: Table 4.2. Using gmail.com as an SMTP server Name Example value Delivery method SMTP SMTP address smtp.gmail.com SMTP authentication plain SMTP HELO/EHLO domain smtp.gmail.com SMTP enable StartTLS auto Yes SMTP password password SMTP port 587 SMTP username user @gmail.com The following example uses the sendmail command as a delivery method: Table 4.3. Using sendmail as a delivery method Name Example value Delivery method Sendmail Sendmail location /usr/sbin/sendmail Sendmail arguments -i For security reasons, both Sendmail location and Sendmail argument settings are read-only and can be only set in /etc/foreman/settings.yaml . Both settings currently cannot be set via satellite-installer . For more information see the sendmail 1 man page. If you decide to send email using an SMTP server which uses TLS authentication, also perform one of the following steps: Mark the CA certificate of the SMTP server as trusted. To do so, execute the following commands on Satellite Server: Where mailca.crt is the CA certificate of the SMTP server. Alternatively, in the Satellite web UI, set the SMTP enable StartTLS auto option to No . Click Test email to send a test message to the user's email address to confirm the configuration is working. If a message fails to send, the Satellite web UI displays an error. See the log at /var/log/foreman/production.log for further details. Additional resources For information on configuring email notifications for individual users or user groups, see Configuring Email Notification Preferences in Administering Red Hat Satellite . 4.10. Configuring an alternate CNAME for Satellite You can configure an alternate CNAME for Satellite. This might be useful if you want to deploy the Satellite web interface on a different domain name than the one that is used by client systems to connect to Satellite. You must plan the alternate CNAME configuration in advance prior to installing Capsules and registering hosts to Satellite to avoid redeploying new certificates to hosts. 4.10.1. Configuring Satellite with an alternate CNAME Use this procedure to configure Satellite with an alternate CNAME. Note that the procedures for users of a default Satellite certificate and custom certificate differ. For default Satellite certificate users If you have installed Satellite with a default Satellite certificate and want to configure Satellite with an alternate CNAME, enter the following command on Satellite to generate a new default Satellite SSL certificate with an additional CNAME. 
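Based on the --certs-cname option for satellite-installer described in the next paragraph, the command generally takes a form similar to the following sketch. The FQDN is a placeholder, and the --certs-update-server option is an assumption about how the existing default certificate is regenerated, so verify the options with satellite-installer --full-help before running it.

# Sketch: regenerate the default Satellite certificate with an additional CNAME.
satellite-installer --certs-cname alternate_fqdn.example.com --certs-update-server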
If you have not installed Satellite, you can add the --certs-cname alternate_fqdn option to the satellite-installer command to install Satellite with an alternate CNAME. For custom certificate users If you use Satellite with a custom certificate, when creating a custom certificate, include the alternate CNAME records to the custom certificate. For more information, see Creating a Custom SSL Certificate for Satellite Server . 4.10.2. Configuring hosts to use an alternate Satellite CNAME for content management If Satellite is configured with an alternate CNAME, you can configure hosts to use the alternate Satellite CNAME for content management. To do this, you must point hosts to the alternate Satellite CNAME prior to registering the hosts to Satellite. You can do this using the bootstrap script or manually. Configuring hosts with the bootstrap script On the host, run the bootstrap script with the --server alternate_fqdn.example.com option to register the host to the alternate Satellite CNAME: Configuring hosts manually On the host, edit the /etc/rhsm/rhsm.conf file to update hostname and baseurl settings to point to the alternate host name, for example: [server] # Server hostname: hostname = alternate_fqdn.example.com content omitted [rhsm] # Content base URL: baseurl=https:// alternate_fqdn.example.com /pulp/content/ Now you can register the host with the subscription-manager . 4.11. Configuring Satellite Server with a custom SSL certificate By default, Red Hat Satellite uses a self-signed SSL certificate to enable encrypted communications between Satellite Server, external Capsule Servers, and all hosts. If you cannot use a Satellite self-signed certificate, you can configure Satellite Server to use an SSL certificate signed by an external certificate authority (CA). When you configure Red Hat Satellite with custom SSL certificates, you must fulfill the following requirements: You must use the privacy-enhanced mail (PEM) encoding for the SSL certificates. You must not use the same SSL certificate for both Satellite Server and Capsule Server. The same CA must sign certificates for Satellite Server and Capsule Server. An SSL certificate must not also be a CA certificate. An SSL certificate must include a subject alt name (SAN) entry that matches the common name (CN). An SSL certificate must be allowed for Key Encipherment using a Key Usage extension. An SSL certificate must not have a shortname as the CN. You must not set a passphrase for the private key. To configure your Satellite Server with a custom certificate, complete the following procedures: Section 4.11.1, "Creating a custom SSL certificate for Satellite Server" Section 4.11.2, "Deploying a custom SSL certificate to Satellite Server" Section 4.11.3, "Deploying a custom SSL certificate to hosts" If you have external Capsule Servers registered to Satellite Server, configure them with custom SSL certificates. For more information, see Configuring Capsule Server with a Custom SSL Certificate in Installing Capsule Server . 4.11.1. Creating a custom SSL certificate for Satellite Server Use this procedure to create a custom SSL certificate for Satellite Server. If you already have a custom SSL certificate for Satellite Server, skip this procedure. Procedure To store all the source certificate files, create a directory that is accessible only to the root user: Create a private key with which to sign the certificate signing request (CSR). Note that the private key must be unencrypted. 
If you use a password-protected private key, remove the private key password. If you already have a private key for this Satellite Server, skip this step. Create the /root/satellite_cert/openssl.cnf configuration file for the CSR and include the following content: Optional: If you want to add Distinguished Name (DN) details to the CSR, add the following information to the [ req_distinguished_name ] section: 1 Two letter code 2 Full name 3 Full name (example: New York) 4 Division responsible for the certificate (example: IT department) Generate CSR: 1 Path to the private key 2 Path to the configuration file 3 Path to the CSR to generate Send the certificate signing request to the certificate authority (CA). The same CA must sign certificates for Satellite Server and Capsule Server. When you submit the request, specify the lifespan of the certificate. The method for sending the certificate request varies, so consult the CA for the preferred method. In response to the request, you can expect to receive a CA bundle and a signed certificate, in separate files. 4.11.2. Deploying a custom SSL certificate to Satellite Server Use this procedure to configure your Satellite Server to use a custom SSL certificate signed by a Certificate Authority. Important Do not store the SSL certificates or .tar bundles in /tmp or /var/tmp directory. The operating system removes files from these directories periodically. As a result, satellite-installer fails to execute while enabling features or upgrading Satellite Server. Procedure Update certificates on your Satellite Server: 1 Path to Satellite Server certificate file that is signed by a Certificate Authority. 2 Path to the private key that was used to sign Satellite Server certificate. 3 Path to the Certificate Authority bundle. Verification On a computer with network access to Satellite Server, navigate to the following URL: https://satellite.example.com . In your browser, view the certificate details to verify the deployed certificate. 4.11.3. Deploying a custom SSL certificate to hosts After you configure Satellite to use a custom SSL certificate, you must deploy the certificate to hosts registered to Satellite. Procedure Update the SSL certificate on each host: 4.12. Resetting custom SSL certificate to default self-signed certificate on Satellite Server Procedure Reset the custom SSL certificate to default self-signed certificate: Verification Verify that the following parameters in /etc/foreman-installer/scenarios.d/satellite-answers.yaml have no values: server_cert: server_key: server_cert_req: server_ca_cert: Additional resources Resetting custom SSL certificate to default self-signed certificate on Capsule Server in Installing Capsule Server . Resetting custom SSL certificate to default self-signed certificate on hosts in Managing hosts . 4.13. Using external databases with Satellite As part of the installation process for Red Hat Satellite, the satellite-installer command installs PostgreSQL databases on the same server as Satellite. In certain Satellite deployments, using external databases instead of the default local databases can help with the server load. Red Hat does not provide support or tools for external database maintenance. This includes backups, upgrades, and database tuning. You must have your own database administrator to support and maintain external databases. To create and use external databases for Satellite, you must complete the following procedures: Section 4.13.2, "Preparing a host for external databases" . 
Prepare a host for the external databases. Section 4.13.3, "Installing PostgreSQL" . Prepare PostgreSQL with databases for Satellite, Candlepin and Pulp with dedicated users owning them. Section 4.13.4, "Configuring Satellite Server to use external databases" . Edit the parameters of satellite-installer to point to the new databases, and run satellite-installer . 4.13.1. PostgreSQL as an external database considerations Foreman, Katello, and Candlepin use the PostgreSQL database. If you want to use PostgreSQL as an external database, the following information can help you decide if this option is right for your Satellite configuration. Satellite supports PostgreSQL version 13. Advantages of external PostgreSQL Increase in free memory and free CPU on Satellite Flexibility to set shared_buffers on the PostgreSQL database to a high number without the risk of interfering with other services on Satellite Flexibility to tune the PostgreSQL server's system without adversely affecting Satellite operations Disadvantages of external PostgreSQL Increase in deployment complexity that can make troubleshooting more difficult The external PostgreSQL server is an additional system to patch and maintain If either Satellite or the PostgreSQL database server suffers a hardware or storage failure, Satellite is not operational If there is latency between the Satellite server and database server, performance can suffer If you suspect that the PostgreSQL database on your Satellite is causing performance problems, use the information in Satellite 6: How to enable postgres query logging to detect slow running queries to determine if you have slow queries. Queries that take longer than one second are typically caused by performance issues with large installations, and moving to an external database might not help. If you have slow queries, contact Red Hat Support. 4.13.2. Preparing a host for external databases Install a freshly provisioned system with the latest Red Hat Enterprise Linux 9 or Red Hat Enterprise Linux 8 to host the external databases. Subscriptions for Red Hat Enterprise Linux do not provide the correct service level agreement for using Satellite with external databases. You must also attach a Satellite subscription to the base operating system that you want to use for the external databases. Prerequisites The prepared host must meet Satellite's Storage Requirements . You must attach a Satellite subscription to your server. For more information about subscription, see Attaching the Satellite Infrastructure Subscription in Installing Satellite Server in a connected network environment . Procedure Select the operating system and version you are installing external database on: Red Hat Enterprise Linux 9 Red Hat Enterprise Linux 8 4.13.2.1. Red Hat Enterprise Linux 9 Disable all repositories: Enable the following repositories: Verification Verify that the required repositories are enabled: 4.13.2.2. Red Hat Enterprise Linux 8 Disable all repositories: Enable the following repositories: Enable the following module: Note Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. 
For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . Verification Verify that the required repositories are enabled: 4.13.3. Installing PostgreSQL You can install only the same version of PostgreSQL that is installed with the satellite-installer tool during an internal database installation. Satellite supports PostgreSQL version 12. Procedure To install PostgreSQL, enter the following command: To initialize PostgreSQL, enter the following command: Edit the /var/lib/pgsql/data/postgresql.conf file: Note that the default configuration of external PostgreSQL needs to be adjusted to work with Satellite. The base recommended external database configuration adjustments are as follows: checkpoint_completion_target: 0.9 max_connections: 500 shared_buffers: 512MB work_mem: 4MB Remove the # and edit to listen to inbound connections: Add the following line to the end of the file to use SCRAM for authentication: Edit the /var/lib/pgsql/data/pg_hba.conf file: Add the following line to the file: To start, and enable PostgreSQL service, enter the following commands: Open the postgresql port on the external PostgreSQL server: Make the changes persistent: Switch to the postgres user and start the PostgreSQL client: Create three databases and dedicated roles: one for Satellite, one for Candlepin, and one for Pulp: Connect to the Pulp database: Create the hstore extension: Exit the postgres user: From Satellite Server, test that you can access the database. If the connection succeeds, the commands return 1 . 4.13.4. Configuring Satellite Server to use external databases Use the satellite-installer command to configure Satellite to connect to an external PostgreSQL database. Prerequisites You have installed and configured a PostgreSQL database on a Red Hat Enterprise Linux server. Procedure To configure the external databases for Satellite, enter the following command: To enable the Secure Sockets Layer (SSL) protocol for these external databases, add the following options: | [
"satellite-maintain packages install insights-client",
"satellite-installer --register-with-insights",
"insights-client --unregister",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=pull-mqtt",
"firewall-cmd --add-service=mqtt",
"firewall-cmd --runtime-to-permanent",
"satellite-maintain packages update grub2-efi",
"unset http_proxy https_proxy no_proxy",
"hammer http-proxy create --name= My_HTTP_Proxy --username= My_HTTP_Proxy_User_Name --password= My_HTTP_Proxy_Password --url http:// http-proxy.example.com :8080",
"hammer settings set --name=content_default_http_proxy --value= My_HTTP_Proxy",
"semanage port -l | grep http_cache http_cache_port_t tcp 8080, 8118, 8123, 10001-10010 [output truncated]",
"semanage port -a -t http_cache_port_t -p tcp 8088",
"hammer settings set --name=http_proxy --value= Proxy_URL",
"hammer settings set --name=http_proxy_except_list --value=[ hostname1.hostname2... ]",
"hammer settings set --name=content_default_http_proxy --value=\"\"",
"satellite-installer --foreman-proxy-bmc \"true\" --foreman-proxy-bmc-default-provider \"freeipmi\"",
"satellite-installer --foreman-proxy-dns true --foreman-proxy-dns-managed true --foreman-proxy-dns-zone example.com --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-range \" 192.0.2.100 192.0.2.150 \" --foreman-proxy-dhcp-gateway 192.0.2.1 --foreman-proxy-dhcp-nameservers 192.0.2.2 --foreman-proxy-tftp true --foreman-proxy-tftp-managed true --foreman-proxy-tftp-servername 192.0.2.3",
"satellite-installer --foreman-proxy-dhcp false --foreman-proxy-dns false --foreman-proxy-tftp false",
"Option 66: IP address of Satellite or Capsule Option 67: /pxelinux.0",
"cp mailca.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust enable update-ca-trust",
"satellite-installer --certs-cname alternate_fqdn --certs-update-server",
"./bootstrap.py --server alternate_fqdn.example.com",
"Server hostname: hostname = alternate_fqdn.example.com content omitted Content base URL: baseurl=https:// alternate_fqdn.example.com /pulp/content/",
"mkdir /root/satellite_cert",
"openssl genrsa -out /root/satellite_cert/satellite_cert_key.pem 4096",
"[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] commonName = satellite.example.com [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [ alt_names ] DNS.1 = satellite.example.com",
"[req_distinguished_name] CN = satellite.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4",
"openssl req -new -key /root/satellite_cert/satellite_cert_key.pem \\ 1 -config /root/satellite_cert/openssl.cnf \\ 2 -out /root/satellite_cert/satellite_cert_csr.pem 3",
"satellite-installer --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" \\ 1 --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" \\ 2 --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" \\ 3 --certs-update-server --certs-update-server-ca",
"dnf install http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm",
"satellite-installer --certs-reset",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=satellite-6.16-for-rhel-9-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms",
"dnf repolist enabled",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=satellite-6.16-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"dnf module enable satellite:el8",
"dnf repolist enabled",
"dnf install postgresql-server postgresql-evr postgresql-contrib",
"postgresql-setup initdb",
"vi /var/lib/pgsql/data/postgresql.conf",
"listen_addresses = '*'",
"password_encryption=scram-sha-256",
"vi /var/lib/pgsql/data/pg_hba.conf",
"host all all Satellite_ip /32 scram-sha-256",
"systemctl enable --now postgresql",
"firewall-cmd --add-service=postgresql",
"firewall-cmd --runtime-to-permanent",
"su - postgres -c psql",
"CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;",
"postgres=# \\c pulpcore You are now connected to database \"pulpcore\" as user \"postgres\".",
"pulpcore=# CREATE EXTENSION IF NOT EXISTS \"hstore\"; CREATE EXTENSION",
"\\q",
"PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"",
"satellite-installer --katello-candlepin-manage-db false --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-user candlepin --katello-candlepin-db-password Candlepin_Password --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-user pulp --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-db-manage false --foreman-db-host postgres.example.com --foreman-db-database foreman --foreman-db-username foreman --foreman-db-password Foreman_Password",
"--foreman-db-root-cert <path_to_CA> --foreman-db-sslmode verify-full --foreman-proxy-content-pulpcore-postgresql-ssl true --foreman-proxy-content-pulpcore-postgresql-ssl-root-ca <path_to_CA> --katello-candlepin-db-ssl true --katello-candlepin-db-ssl-ca <path_to_CA> --katello-candlepin-db-ssl-verify true"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_satellite_server_in_a_connected_network_environment/performing-additional-configuration-on-server_satellite |
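The manual host configuration in section 4.10.2 above edits /etc/rhsm/rhsm.conf by hand before registration. The lines below are an unofficial sketch of the same change, assuming the alternate CNAME alternate_fqdn.example.com used throughout this chapter; the organization and activation key values are placeholders and are not part of the documented procedure.

# Unofficial sketch: point an unregistered host at the alternate Satellite CNAME.
ALT_FQDN="alternate_fqdn.example.com"

# Point the subscription service at the alternate CNAME ([server] section).
sed -i "s/^hostname\s*=.*/hostname = ${ALT_FQDN}/" /etc/rhsm/rhsm.conf

# Point content downloads at the alternate CNAME ([rhsm] section).
sed -i "s|^baseurl\s*=.*|baseurl=https://${ALT_FQDN}/pulp/content/|" /etc/rhsm/rhsm.conf

# Register as usual; the organization and activation key below are placeholders.
subscription-manager register --org="My_Organization" --activationkey="My_Activation_Key"

After registration, subscription-manager and content downloads use the alternate host name, which matches the outcome of the documented manual edit.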
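Section 4.11 above lists several requirements that a custom SSL certificate must meet, including a SAN entry that matches the CN, a Key Usage extension allowing Key Encipherment, not being a CA certificate, and an unencrypted private key. The commands below are a quick, unofficial sanity sketch of those checks, assuming the file names used in sections 4.11.1 and 4.11.2 (/root/satellite_cert/satellite_cert.pem, satellite_cert_key.pem, and ca_cert_bundle.pem).

# Unofficial pre-flight checks against the requirements in section 4.11.
CERT=/root/satellite_cert/satellite_cert.pem
KEY=/root/satellite_cert/satellite_cert_key.pem
CA=/root/satellite_cert/ca_cert_bundle.pem

# The certificate must chain back to the CA bundle passed to satellite-installer.
openssl verify -CAfile "$CA" "$CERT"

# Key Usage must include Key Encipherment and the SAN list must contain the CN.
openssl x509 -noout -text -in "$CERT" | grep -A1 -E "Key Usage|Subject Alternative Name"

# The certificate must not itself be a CA certificate (expect CA:FALSE).
openssl x509 -noout -text -in "$CERT" | grep "CA:"

# The certificate and the unencrypted private key must belong together
# (the two digests must be identical for an RSA key pair).
openssl x509 -noout -modulus -in "$CERT" | openssl md5
openssl rsa -noout -modulus -in "$KEY" | openssl md5

If any of these checks fail, correct the certificate with your CA before running satellite-installer with the --certs-server-cert, --certs-server-key, and --certs-server-ca-cert options.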
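Section 4.13.3 above lists the base recommended postgresql.conf adjustments for an external database (checkpoint_completion_target, max_connections, shared_buffers, and work_mem) together with listen_addresses and SCRAM password encryption. As a rough, unofficial sketch only, the same settings can be appended to the configuration file in one step, because PostgreSQL honors the last occurrence of a parameter; the documented procedure edits the file in place instead.

# Unofficial sketch: apply the base recommended settings from section 4.13.3.
cat >> /var/lib/pgsql/data/postgresql.conf <<'EOF'
listen_addresses = '*'
password_encryption = scram-sha-256
checkpoint_completion_target = 0.9
max_connections = 500
shared_buffers = 512MB
work_mem = 4MB
EOF

# Restart PostgreSQL so that the new settings take effect.
systemctl restart postgresql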
5.2.20. /proc/misc | 5.2.20. /proc/misc This file lists miscellaneous drivers registered on the miscellaneous major device, which is device number 10: The first column is the minor number of each device, while the second column shows the driver in use. | [
"63 device-mapper 175 agpgart 135 rtc 134 apm_bios"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-misc |
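As a small, unofficial illustration of the two-column format described above (minor number first, driver name second), the following shell loop prints each entry of /proc/misc:

# Print every registered miscellaneous driver with its minor number.
while read -r minor driver; do
    printf 'minor %-4s -> %s\n' "$minor" "$driver"
done < /proc/misc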
Chapter 3. Developer CLI (odo) | Chapter 3. Developer CLI (odo) 3.1. odo release notes 3.1.1. Notable changes and improvements in odo version 2.5.0 Creates unique routes for each component, using adler32 hashing Supports additional fields in the devfile for assigning resources: cpuRequest cpuLimit memoryRequest memoryLimit Adds the --deploy flag to the odo delete command, to remove components deployed using the odo deploy command: USD odo delete --deploy Adds mapping support to the odo link command Supports ephemeral volumes using the ephemeral field in volume components Sets the default answer to yes when asking for telemetry opt-in Improves metrics by sending additional telemetry data to the devfile registry Updates the bootstrap image to registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.11 The upstream repository is available at https://github.com/redhat-developer/odo 3.1.2. Bug fixes Previously, odo deploy would fail if the .odo/env file did not exist. The command now creates the .odo/env file if required. Previously, interactive component creation using the odo create command would fail if disconnect from the cluster. This issue is fixed in the latest release. 3.1.3. Getting support For Product If you find an error, encounter a bug, or have suggestions for improving the functionality of odo , file an issue in Bugzilla . Choose OpenShift Developer Tools and Services as a product type and odo as a component. Provide as many details in the issue description as possible. For Documentation If you find an error or have suggestions for improving the documentation, file a Jira issue for the most relevant documentation component. 3.2. Understanding odo Red Hat OpenShift Developer CLI ( odo ) is a tool for creating applications on OpenShift Container Platform and Kubernetes. With odo , you can develop, test, debug, and deploy microservices-based applications on a Kubernetes cluster without having a deep understanding of the platform. odo follows a create and push workflow. As a user, when you create , the information (or manifest) is stored in a configuration file. When you push , the corresponding resources are created on the Kubernetes cluster. All of this configuration is stored in the Kubernetes API for seamless accessibility and functionality. odo uses service and link commands to link components and services together. odo achieves this by creating and deploying services based on Kubernetes Operators in the cluster. Services can be created using any of the Operators available on the Operator Hub. After linking a service, odo injects the service configuration into the component. Your application can then use this configuration to communicate with the Operator-backed service. 3.2.1. odo key features odo is designed to be a developer-friendly interface to Kubernetes, with the ability to: Quickly deploy applications on a Kubernetes cluster by creating a new manifest or using an existing one Use commands to easily create and update the manifest, without the need to understand and maintain Kubernetes configuration files Provide secure access to applications running on a Kubernetes cluster Add and remove additional storage for applications on a Kubernetes cluster Create Operator-backed services and link your application to them Create a link between multiple microservices that are deployed as odo components Remotely debug applications you deployed using odo in your IDE Easily test applications deployed on Kubernetes using odo 3.2.2. 
odo core concepts odo abstracts Kubernetes concepts into terminology that is familiar to developers: Application A typical application, developed with a cloud-native approach , that is used to perform a particular task. Examples of applications include online video streaming, online shopping, and hotel reservation systems. Component A set of Kubernetes resources that can run and be deployed separately. A cloud-native application is a collection of small, independent, loosely coupled components . Examples of components include an API back-end, a web interface, and a payment back-end. Project A single unit containing your source code, tests, and libraries. Context A directory that contains the source code, tests, libraries, and odo config files for a single component. URL A mechanism to expose a component for access from outside the cluster. Storage Persistent storage in the cluster. It persists the data across restarts and component rebuilds. Service An external application that provides additional functionality to a component. Examples of services include PostgreSQL, MySQL, Redis, and RabbitMQ. In odo , services are provisioned from the OpenShift Service Catalog and must be enabled within your cluster. devfile An open standard for defining containerized development environments that enables developer tools to simplify and accelerate workflows. For more information, see the documentation at https://devfile.io . You can connect to publicly available devfile registries, or you can install a Secure Registry. 3.2.3. Listing components in odo odo uses the portable devfile format to describe components and their related URLs, storage, and services. odo can connect to various devfile registries to download devfiles for different languages and frameworks. See the documentation for the odo registry command for more information on how to manage the registries used by odo to retrieve devfile information. You can list all the devfiles available of the different registries with the odo catalog list components command. Procedure Log in to the cluster with odo : USD odo login -u developer -p developer List the available odo components: USD odo catalog list components Example output Odo Devfile Components: NAME DESCRIPTION REGISTRY dotnet50 Stack with .NET 5.0 DefaultDevfileRegistry dotnet60 Stack with .NET 6.0 DefaultDevfileRegistry dotnetcore31 Stack with .NET Core 3.1 DefaultDevfileRegistry go Stack with the latest Go version DefaultDevfileRegistry java-maven Upstream Maven and OpenJDK 11 DefaultDevfileRegistry java-openliberty Java application Maven-built stack using the Open Liberty ru... DefaultDevfileRegistry java-openliberty-gradle Java application Gradle-built stack using the Open Liberty r... DefaultDevfileRegistry java-quarkus Quarkus with Java DefaultDevfileRegistry java-springboot Spring Boot(R) using Java DefaultDevfileRegistry java-vertx Upstream Vert.x using Java DefaultDevfileRegistry java-websphereliberty Java application Maven-built stack using the WebSphere Liber... DefaultDevfileRegistry java-websphereliberty-gradle Java application Gradle-built stack using the WebSphere Libe... DefaultDevfileRegistry java-wildfly Upstream WildFly DefaultDevfileRegistry java-wildfly-bootable-jar Java stack with WildFly in bootable Jar mode, OpenJDK 11 and... 
DefaultDevfileRegistry nodejs Stack with Node.js 14 DefaultDevfileRegistry nodejs-angular Stack with Angular 12 DefaultDevfileRegistry nodejs-nextjs Stack with .js 11 DefaultDevfileRegistry nodejs-nuxtjs Stack with Nuxt.js 2 DefaultDevfileRegistry nodejs-react Stack with React 17 DefaultDevfileRegistry nodejs-svelte Stack with Svelte 3 DefaultDevfileRegistry nodejs-vue Stack with Vue 3 DefaultDevfileRegistry php-laravel Stack with Laravel 8 DefaultDevfileRegistry python Python Stack with Python 3.7 DefaultDevfileRegistry python-django Python3.7 with Django DefaultDevfileRegistry 3.2.4. Telemetry in odo odo collects information about how it is being used, including metrics on the operating system, RAM, CPU, number of cores, odo version, errors, success/failures, and how long odo commands take to complete. You can modify your telemetry consent by using the odo preference command: odo preference set ConsentTelemetry true consents to telemetry. odo preference unset ConsentTelemetry disables telemetry. odo preference view shows the current preferences. 3.3. Installing odo You can install the odo CLI on Linux, Windows, or macOS by downloading a binary. You can also install the OpenShift VS Code extension, which uses both the odo and the oc binaries to interact with your OpenShift Container Platform cluster. For Red Hat Enterprise Linux (RHEL), you can install the odo CLI as an RPM. Note Currently, odo does not support installation in a restricted network environment. 3.3.1. Installing odo on Linux The odo CLI is available to download as a binary and as a tarball for multiple operating systems and architectures including: Operating System Binary Tarball Linux odo-linux-amd64 odo-linux-amd64.tar.gz Linux on IBM Power odo-linux-ppc64le odo-linux-ppc64le.tar.gz Linux on IBM Z and LinuxONE odo-linux-s390x odo-linux-s390x.tar.gz Procedure Navigate to the content gateway and download the appropriate file for your operating system and architecture. If you download the binary, rename it to odo : USD curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 -o odo If you download the tarball, extract the binary: USD curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64.tar.gz -o odo.tar.gz USD tar xvzf odo.tar.gz Change the permissions on the binary: USD chmod +x <filename> Place the odo binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verify that odo is now available on your system: USD odo version 3.3.2. Installing odo on Windows The odo CLI for Windows is available to download as a binary and as an archive. Operating System Binary Tarball Windows odo-windows-amd64.exe odo-windows-amd64.exe.zip Procedure Navigate to the content gateway and download the appropriate file: If you download the binary, rename it to odo.exe . If you download the archive, unzip the binary with a ZIP program and then rename it to odo.exe . Move the odo.exe binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verify that odo is now available on your system: C:\> odo version 3.3.3. Installing odo on macOS The odo CLI for macOS is available to download as a binary and as a tarball. 
Operating System Binary Tarball macOS odo-darwin-amd64 odo-darwin-amd64.tar.gz Procedure Navigate to the content gateway and download the appropriate file: If you download the binary, rename it to odo : USD curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64 -o odo If you download the tarball, extract the binary: USD curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64.tar.gz -o odo.tar.gz USD tar xvzf odo.tar.gz Change the permissions on the binary: # chmod +x odo Place the odo binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verify that odo is now available on your system: USD odo version 3.3.4. Installing odo on VS Code The OpenShift VS Code extension uses both odo and the oc binary to interact with your OpenShift Container Platform cluster. To work with these features, install the OpenShift VS Code extension on VS Code. Prerequisites You have installed VS Code. Procedure Open VS Code. Launch VS Code Quick Open with Ctrl + P . Enter the following command: 3.3.5. Installing odo on Red Hat Enterprise Linux (RHEL) using an RPM For Red Hat Enterprise Linux (RHEL), you can install the odo CLI as an RPM. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift Developer Tools and Services*' In the output of the command, find the Pool ID field for your OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by odo : # subscription-manager repos --enable="ocp-tools-4.9-for-rhel-8-x86_64-rpms" Install the odo package: # yum install odo Verify that odo is now available on your system: USD odo version 3.4. Configuring the odo CLI You can find the global settings for odo in the preference.yaml file which is located by default in your USDHOME/.odo directory. You can set a different location for the preference.yaml file by exporting the GLOBALODOCONFIG variable. 3.4.1. Viewing the current configuration You can view the current odo CLI configuration by using the following command: USD odo preference view Example output PARAMETER CURRENT_VALUE UpdateNotification NamePrefix Timeout BuildTimeout PushTimeout Ephemeral ConsentTelemetry true 3.4.2. Setting a value You can set a value for a preference key by using the following command: USD odo preference set <key> <value> Note Preference keys are case-insensitive. Example command USD odo preference set updatenotification false Example output Global preference was successfully updated 3.4.3. Unsetting a value You can unset a value for a preference key by using the following command: USD odo preference unset <key> Note You can use the -f flag to skip the confirmation. Example command USD odo preference unset updatenotification ? Do you want to unset updatenotification in the preference (y/N) y Example output Global preference was successfully updated 3.4.4. Preference key table The following table shows the available options for setting preference keys for the odo CLI: Preference key Description Default value UpdateNotification Control whether a notification to update odo is shown. True NamePrefix Set a default name prefix for an odo resource. 
For example, component or storage . Current directory name Timeout Timeout for the Kubernetes server connection check. 1 second BuildTimeout Timeout for waiting for a build of the git component to complete. 300 seconds PushTimeout Timeout for waiting for a component to start. 240 seconds Ephemeral Controls whether odo should create an emptyDir volume to store source code. True ConsentTelemetry Controls whether odo can collect telemetry for the user's odo usage. False 3.4.5. Ignoring files or patterns You can configure a list of files or patterns to ignore by modifying the .odoignore file in the root directory of your application. This applies to both odo push and odo watch . If the .odoignore file does not exist, the .gitignore file is used instead for ignoring specific files and folders. To ignore .git files, any files with the .js extension, and the folder tests , add the following to either the .odoignore or the .gitignore file: The .odoignore file allows any glob expressions. 3.5. odo CLI reference 3.5.1. odo build-images odo can build container images based on Dockerfiles, and push these images to their registries. When running the odo build-images command, odo searches for all components in the devfile.yaml with the image type, for example: components: - image: imageName: quay.io/myusername/myimage dockerfile: uri: ./Dockerfile 1 buildContext: USD{PROJECTS_ROOT} 2 name: component-built-from-dockerfile 1 The uri field indicates the relative path of the Dockerfile to use, relative to the directory containing the devfile.yaml . The devfile specification indicates that uri could also be an HTTP URL, but this case is not supported by odo yet. 2 The buildContext indicates the directory used as build context. The default value is USD{PROJECTS_ROOT} . For each image component, odo executes either podman or docker (the first one found, in this order), to build the image with the specified Dockerfile, build context, and arguments. If the --push flag is passed to the command, the images are pushed to their registries after they are built. 3.5.2. odo catalog odo uses different catalogs to deploy components and services . 3.5.2.1. Components odo uses the portable devfile format to describe the components. It can connect to various devfile registries to download devfiles for different languages and frameworks. See odo registry for more information. 3.5.2.1.1. Listing components To list all the devfiles available on the different registries, run the command: USD odo catalog list components Example output NAME DESCRIPTION REGISTRY go Stack with the latest Go version DefaultDevfileRegistry java-maven Upstream Maven and OpenJDK 11 DefaultDevfileRegistry nodejs Stack with Node.js 14 DefaultDevfileRegistry php-laravel Stack with Laravel 8 DefaultDevfileRegistry python Python Stack with Python 3.7 DefaultDevfileRegistry [...] 3.5.2.1.2. Getting information about a component To get more information about a specific component, run the command: USD odo catalog describe component For example, run the command: USD odo catalog describe component nodejs Example output * Registry: DefaultDevfileRegistry 1 Starter Projects: 2 --- name: nodejs-starter attributes: {} description: "" subdir: "" projectsource: sourcetype: "" git: gitlikeprojectsource: commonprojectsource: {} checkoutfrom: null remotes: origin: https://github.com/odo-devfiles/nodejs-ex.git zip: null custom: null 1 Registry is the registry from which the devfile is retrieved. 
2 Starter projects are sample projects in the same language and framework of the devfile, that can help you start a new project. See odo create for more information on creating a project from a starter project. 3.5.2.2. Services odo can deploy services with the help of Operators . Only Operators deployed with the help of the Operator Lifecycle Manager are supported by odo. 3.5.2.2.1. Listing services To list the available Operators and their associated services, run the command: USD odo catalog list services Example output Services available through Operators NAME CRDs postgresql-operator.v0.1.1 Backup, Database redis-operator.v0.8.0 RedisCluster, Redis In this example, two Operators are installed in the cluster. The postgresql-operator.v0.1.1 Operator deploys services related to PostgreSQL: Backup and Database . The redis-operator.v0.8.0 Operator deploys services related to Redis: RedisCluster and Redis . Note To get a list of all the available Operators, odo fetches the ClusterServiceVersion (CSV) resources of the current namespace that are in a Succeeded phase. For Operators that support cluster-wide access, when a new namespace is created, these resources are automatically added to it. However, it may take some time before they are in the Succeeded phase, and odo may return an empty list until the resources are ready. 3.5.2.2.2. Searching services To search for a specific service by a keyword, run the command: USD odo catalog search service For example, to retrieve the PostgreSQL services, run the command: USD odo catalog search service postgres Example output Services available through Operators NAME CRDs postgresql-operator.v0.1.1 Backup, Database You will see a list of Operators that contain the searched keyword in their name. 3.5.2.2.3. Getting information about a service To get more information about a specific service, run the command: USD odo catalog describe service For example: USD odo catalog describe service postgresql-operator.v0.1.1/Database Example output KIND: Database VERSION: v1alpha1 DESCRIPTION: Database is the Schema for the the Database Database API FIELDS: awsAccessKeyId (string) AWS S3 accessKey/token ID Key ID of AWS S3 storage. Default Value: nil Required to create the Secret with the data to allow send the backup files to AWS S3 storage. [...] A service is represented in the cluster by a CustomResourceDefinition (CRD) resource. The command displays the details about the CRD such as kind , version , and the list of fields available to define an instance of this custom resource. The list of fields is extracted from the OpenAPI schema included in the CRD. This information is optional in a CRD, and if it is not present, it is extracted from the ClusterServiceVersion (CSV) resource representing the service instead. It is also possible to request the description of an Operator-backed service, without providing CRD type information. To describe the Redis Operator on a cluster, without CRD, run the following command: USD odo catalog describe service redis-operator.v0.8.0 Example output NAME: redis-operator.v0.8.0 DESCRIPTION: A Golang based redis operator that will make/oversee Redis standalone/cluster mode setup on top of the Kubernetes. It can create a redis cluster setup with best practices on Cloud as well as the Bare metal environment. Also, it provides an in-built monitoring capability using ... 
(cut short for beverity) Logging Operator is licensed under [Apache License, Version 2.0](https://github.com/OT-CONTAINER-KIT/redis-operator/blob/master/LICENSE) CRDs: NAME DESCRIPTION RedisCluster Redis Cluster Redis Redis 3.5.3. odo create odo uses a devfile to store the configuration of a component and to describe the component's resources such as storage and services. The odo create command generates this file. 3.5.3.1. Creating a component To create a devfile for an existing project, run the odo create command with the name and type of your component (for example, nodejs or go ): odo create nodejs mynodejs In the example, nodejs is the type of the component and mynodejs is the name of the component that odo creates for you. Note For a list of all the supported component types, run the command odo catalog list components . If your source code exists outside the current directory, the --context flag can be used to specify the path. For example, if the source for the nodejs component is in a folder called node-backend relative to the current working directory, run the command: odo create nodejs mynodejs --context ./node-backend The --context flag supports relative and absolute paths. To specify the project or app where your component will be deployed, use the --project and --app flags. For example, to create a component that is part of the myapp app inside the backend project, run the command: odo create nodejs --app myapp --project backend Note If these flags are not specified, they will default to the active app and project. 3.5.3.2. Starter projects Use the starter projects if you do not have existing source code but want to get up and running quickly to experiment with devfiles and components. To use a starter project, add the --starter flag to the odo create command. To get a list of available starter projects for a component type, run the odo catalog describe component command. For example, to get all available starter projects for the nodejs component type, run the command: odo catalog describe component nodejs Then specify the desired project using the --starter flag on the odo create command: odo create nodejs --starter nodejs-starter This will download the example template corresponding to the chosen component type, in this instance, nodejs . The template is downloaded to your current directory, or to the location specified by the --context flag. If a starter project has its own devfile, then this devfile will be preserved. 3.5.3.3. Using an existing devfile If you want to create a new component from an existing devfile, you can do so by specifying the path to the devfile using the --devfile flag. For example, to create a component called mynodejs , based on a devfile from GitHub, use the following command: odo create mynodejs --devfile https://raw.githubusercontent.com/odo-devfiles/registry/master/devfiles/nodejs/devfile.yaml 3.5.3.4. Interactive creation You can also run the odo create command interactively, to guide you through the steps needed to create a component: USD odo create ? Which devfile component type do you wish to create go ? What do you wish to name the new devfile component go-api ? What project do you want the devfile component to be created in default Devfile Object Validation [✓] Checking devfile existence [164258ns] [✓] Creating a devfile component from registry: DefaultDevfileRegistry [246051ns] Validation [✓] Validating if devfile name is correct [92255ns] ? 
Do you want to download a starter project Yes Starter Project [✓] Downloading starter project go-starter from https://github.com/devfile-samples/devfile-stack-go.git [429ms] Please use odo push command to create the component with source deployed You are prompted to choose the component type, name, and the project for the component. You can also choose whether or not to download a starter project. Once finished, a new devfile.yaml file is created in the working directory. To deploy these resources to your cluster, run the command odo push . 3.5.4. odo delete The odo delete command is useful for deleting resources that are managed by odo . 3.5.4.1. Deleting a component To delete a devfile component, run the odo delete command: USD odo delete If the component has been pushed to the cluster, the component is deleted from the cluster, along with its dependent storage, URL, secrets, and other resources. If the component has not been pushed, the command exits with an error stating that it could not find the resources on the cluster. Use the -f or --force flag to avoid the confirmation questions. 3.5.4.2. Undeploying devfile Kubernetes components To undeploy the devfile Kubernetes components, that have been deployed with odo deploy , execute the odo delete command with the --deploy flag: USD odo delete --deploy Use the -f or --force flag to avoid the confirmation questions. 3.5.4.3. Delete all To delete all artifacts including the following items, run the odo delete command with the --all flag : devfile component Devfile Kubernetes component that was deployed using the odo deploy command Devfile Local configuration USD odo delete --all 3.5.4.4. Available flags -f , --force Use this flag to avoid the confirmation questions. -w , --wait Use this flag to wait for component deletion and any dependencies. This flag does not work when undeploying. The documentation on Common Flags provides more information on the flags available for commands. 3.5.5. odo deploy odo can be used to deploy components in a manner similar to how they would be deployed using a CI/CD system. First, odo builds the container images, and then it deploys the Kubernetes resources required to deploy the components. When running the command odo deploy , odo searches for the default command of kind deploy in the devfile, and executes this command. The kind deploy is supported by the devfile format starting from version 2.2.0. The deploy command is typically a composite command, composed of several apply commands: A command referencing an image component that, when applied, will build the image of the container to deploy, and then push it to its registry. A command referencing a Kubernetes component that, when applied, will create a Kubernetes resource in the cluster. With the following example devfile.yaml file, a container image is built using the Dockerfile present in the directory. The image is pushed to its registry and then a Kubernetes Deployment resource is created in the cluster, using this freshly built image. schemaVersion: 2.2.0 [...] 
variables: CONTAINER_IMAGE: quay.io/phmartin/myimage commands: - id: build-image apply: component: outerloop-build - id: deployk8s apply: component: outerloop-deploy - id: deploy composite: commands: - build-image - deployk8s group: kind: deploy isDefault: true components: - name: outerloop-build image: imageName: "{{CONTAINER_IMAGE}}" dockerfile: uri: ./Dockerfile buildContext: USD{PROJECTS_ROOT} - name: outerloop-deploy kubernetes: inlined: | kind: Deployment apiVersion: apps/v1 metadata: name: my-component spec: replicas: 1 selector: matchLabels: app: node-app template: metadata: labels: app: node-app spec: containers: - name: main image: {{CONTAINER_IMAGE}} 3.5.6. odo link The odo link command helps link an odo component to an Operator-backed service or another odo component. It does this by using the Service Binding Operator . Currently, odo makes use of the Service Binding library and not the Operator itself to achieve the desired functionality. 3.5.6.1. Various linking options odo provides various options for linking a component with an Operator-backed service or another odo component. All these options (or flags) can be used whether you are linking a component to a service or to another component. 3.5.6.1.1. Default behavior By default, the odo link command creates a directory named kubernetes/ in your component directory and stores the information (YAML manifests) about services and links there. When you use odo push , odo compares these manifests with the state of the resources on the Kubernetes cluster and decides whether it needs to create, modify or destroy resources to match what is specified by the user. 3.5.6.1.2. The --inlined flag If you specify the --inlined flag to the odo link command, odo stores the link information inline in the devfile.yaml in the component directory, instead of creating a file under the kubernetes/ directory. The behavior of the --inlined flag is similar in both the odo link and odo service create commands. This flag is helpful if you want everything stored in a single devfile.yaml . You have to remember to use --inlined flag with each odo link and odo service create command that you execute for the component. 3.5.6.1.3. The --map flag Sometimes, you might want to add more binding information to the component, in addition to what is available by default. For example, if you are linking the component with a service and would like to bind some information from the service's spec (short for specification), you could use the --map flag. Note that odo does not do any validation against the spec of the service or component being linked. Using this flag is only recommended if you are comfortable using the Kubernetes YAML manifests. 3.5.6.1.4. The --bind-as-files flag For all the linking options discussed so far, odo injects the binding information into the component as environment variables. If you would like to mount this information as files instead, you can use the --bind-as-files flag. This will make odo inject the binding information as files into the /bindings location within your component's Pod. Compared to the environment variables scenario, when you use --bind-as-files , the files are named after the keys and the value of these keys is stored as the contents of these files. 3.5.6.2. Examples 3.5.6.2.1. Default odo link In the following example, the backend component is linked with the PostgreSQL service using the default odo link command. 
For the backend component, make sure that your component and service are pushed to the cluster: USD odo list Sample output APP NAME PROJECT TYPE STATE MANAGED BY ODO app backend myproject spring Pushed Yes USD odo service list Sample output NAME MANAGED BY ODO STATE AGE PostgresCluster/hippo Yes (backend) Pushed 59m41s Now, run odo link to link the backend component with the PostgreSQL service: USD odo link PostgresCluster/hippo Example output [✓] Successfully created link between component "backend" and service "PostgresCluster/hippo" To apply the link, please use `odo push` And then run odo push to actually create the link on the Kubernetes cluster. After a successful odo push , you will see a few outcomes: When you open the URL for the application deployed by backend component, it shows a list of todo items in the database. For example, in the output for the odo url list command, the path where todos are listed is included: USD odo url list Sample output Found the following URLs for component backend NAME STATE URL PORT SECURE KIND 8080-tcp Pushed http://8080-tcp.192.168.39.112.nip.io 8080 false ingress The correct path for the URL would be http://8080-tcp.192.168.39.112.nip.io/api/v1/todos. The exact URL depends on your setup. Also note that there are no todos in the database unless you add some, so the URL might just show an empty JSON object. You can see binding information related to the Postgres service injected into the backend component. This binding information is injected, by default, as environment variables. You can check it using the odo describe command from the backend component's directory: USD odo describe Example output: Component Name: backend Type: spring Environment Variables: · PROJECTS_ROOT=/projects · PROJECT_SOURCE=/projects · DEBUG_PORT=5858 Storage: · m2 of size 3Gi mounted to /home/user/.m2 URLs: · http://8080-tcp.192.168.39.112.nip.io exposed via 8080 Linked Services: · PostgresCluster/hippo Environment Variables: · POSTGRESCLUSTER_PGBOUNCER-EMPTY · POSTGRESCLUSTER_PGBOUNCER.INI · POSTGRESCLUSTER_ROOT.CRT · POSTGRESCLUSTER_VERIFIER · POSTGRESCLUSTER_ID_ECDSA · POSTGRESCLUSTER_PGBOUNCER-VERIFIER · POSTGRESCLUSTER_TLS.CRT · POSTGRESCLUSTER_PGBOUNCER-URI · POSTGRESCLUSTER_PATRONI.CRT-COMBINED · POSTGRESCLUSTER_USER · pgImage · pgVersion · POSTGRESCLUSTER_CLUSTERIP · POSTGRESCLUSTER_HOST · POSTGRESCLUSTER_PGBACKREST_REPO.CONF · POSTGRESCLUSTER_PGBOUNCER-USERS.TXT · POSTGRESCLUSTER_SSH_CONFIG · POSTGRESCLUSTER_TLS.KEY · POSTGRESCLUSTER_CONFIG-HASH · POSTGRESCLUSTER_PASSWORD · POSTGRESCLUSTER_PATRONI.CA-ROOTS · POSTGRESCLUSTER_DBNAME · POSTGRESCLUSTER_PGBOUNCER-PASSWORD · POSTGRESCLUSTER_SSHD_CONFIG · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.KEY · POSTGRESCLUSTER_PGBACKREST_INSTANCE.CONF · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CA-ROOTS · POSTGRESCLUSTER_PGBOUNCER-HOST · POSTGRESCLUSTER_PORT · POSTGRESCLUSTER_ROOT.KEY · POSTGRESCLUSTER_SSH_KNOWN_HOSTS · POSTGRESCLUSTER_URI · POSTGRESCLUSTER_PATRONI.YAML · POSTGRESCLUSTER_DNS.CRT · POSTGRESCLUSTER_DNS.KEY · POSTGRESCLUSTER_ID_ECDSA.PUB · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CRT · POSTGRESCLUSTER_PGBOUNCER-PORT · POSTGRESCLUSTER_CA.CRT Some of these variables are used in the backend component's src/main/resources/application.properties file so that the Java Spring Boot application can connect to the PostgreSQL database service. 
Lastly, odo has created a directory called kubernetes/ in your backend component's directory that contains the following files: USD ls kubernetes odo-service-backend-postgrescluster-hippo.yaml odo-service-hippo.yaml These files contain the information (YAML manifests) for two resources: odo-service-hippo.yaml - the Postgres service created using odo service create --from-file ../postgrescluster.yaml command. odo-service-backend-postgrescluster-hippo.yaml - the link created using odo link command. 3.5.6.2.2. Using odo link with the --inlined flag Using the --inlined flag with the odo link command has the same effect as an odo link command without the flag, in that it injects binding information. However, the subtle difference is that in the above case, there are two manifest files under kubernetes/ directory, one for the Postgres service and another for the link between the backend component and this service. However, when you pass the --inlined flag, odo does not create a file under the kubernetes/ directory to store the YAML manifest, but rather stores it inline in the devfile.yaml file. To see this, unlink the component from the PostgreSQL service first: USD odo unlink PostgresCluster/hippo Example output: [✓] Successfully unlinked component "backend" from service "PostgresCluster/hippo" To apply the changes, please use `odo push` To unlink them on the cluster, run odo push . Now if you inspect the kubernetes/ directory, you see only one file: USD ls kubernetes odo-service-hippo.yaml , use the --inlined flag to create a link: USD odo link PostgresCluster/hippo --inlined Example output: [✓] Successfully created link between component "backend" and service "PostgresCluster/hippo" To apply the link, please use `odo push` You need to run odo push for the link to get created on the cluster, like the procedure that omits the --inlined flag. odo stores the configuration in devfile.yaml . In this file, you can see an entry like the following: kubernetes: inlined: | apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: creationTimestamp: null name: backend-postgrescluster-hippo spec: application: group: apps name: backend-app resource: deployments version: v1 bindAsFiles: false detectBindingResources: true services: - group: postgres-operator.crunchydata.com id: hippo kind: PostgresCluster name: hippo version: v1beta1 status: secret: "" name: backend-postgrescluster-hippo Now if you were to run odo unlink PostgresCluster/hippo , odo would first remove the link information from the devfile.yaml , and then a subsequent odo push would delete the link from the cluster. 3.5.6.2.3. Custom bindings odo link accepts the flag --map which can inject custom binding information into the component. Such binding information will be fetched from the manifest of the resource that you are linking to your component. For example, in the context of the backend component and PostgreSQL service, you can inject information from the PostgreSQL service's manifest postgrescluster.yaml file into the backend component. 
If the name of your PostgresCluster service is hippo (or the output of odo service list , if your PostgresCluster service is named differently), when you want to inject the value of postgresVersion from that YAML definition into your backend component, run the command: USD odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' Note that, if the name of your Postgres service is different from hippo , you will have to specify that in the above command in the place of .hippo in the value for pgVersion . After a link operation, run odo push as usual. Upon successful completion of the push operation, you can run the following command from your backend component directory, to validate if the custom mapping got injected properly: USD odo exec -- env | grep pgVersion Example output: pgVersion=13 Since you might want to inject more than just one piece of custom binding information, odo link accepts multiple key-value pairs of mappings. The only constraint is that these should be specified as --map <key>=<value> . For example, if you want to also inject PostgreSQL image information along with the version, you could run: USD odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}' and then run odo push . To validate if both the mappings got injected correctly, run the following command: USD odo exec -- env | grep -e "pgVersion\|pgImage" Example output: pgVersion=13 pgImage=registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0 3.5.6.2.3.1. To inline or not? You can accept the default behavior where odo link generate a manifests file for the link under kubernetes/ directory. Alternatively, you can use the --inlined flag if you prefer to store everything in a single devfile.yaml file. 3.5.6.3. Binding as files Another helpful flag that odo link provides is --bind-as-files . When this flag is passed, the binding information is not injected into the component's Pod as environment variables but is mounted as a filesystem. Ensure that there are no existing links between the backend component and the PostgreSQL service. You could do this by running odo describe in the backend component's directory and check if you see output similar to the following: Linked Services: · PostgresCluster/hippo Unlink the service from the component using: USD odo unlink PostgresCluster/hippo USD odo push 3.5.6.4. --bind-as-files examples 3.5.6.4.1. Using the default odo link By default, odo creates the manifest file under the kubernetes/ directory, for storing the link information. 
Link the backend component and PostgreSQL service using: USD odo link PostgresCluster/hippo --bind-as-files USD odo push Example odo describe output: USD odo describe Component Name: backend Type: spring Environment Variables: · PROJECTS_ROOT=/projects · PROJECT_SOURCE=/projects · DEBUG_PORT=5858 · SERVICE_BINDING_ROOT=/bindings · SERVICE_BINDING_ROOT=/bindings Storage: · m2 of size 3Gi mounted to /home/user/.m2 URLs: · http://8080-tcp.192.168.39.112.nip.io exposed via 8080 Linked Services: · PostgresCluster/hippo Files: · /bindings/backend-postgrescluster-hippo/pgbackrest_instance.conf · /bindings/backend-postgrescluster-hippo/user · /bindings/backend-postgrescluster-hippo/ssh_known_hosts · /bindings/backend-postgrescluster-hippo/clusterIP · /bindings/backend-postgrescluster-hippo/password · /bindings/backend-postgrescluster-hippo/patroni.yaml · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.crt · /bindings/backend-postgrescluster-hippo/pgbouncer-host · /bindings/backend-postgrescluster-hippo/root.key · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.key · /bindings/backend-postgrescluster-hippo/pgbouncer.ini · /bindings/backend-postgrescluster-hippo/uri · /bindings/backend-postgrescluster-hippo/config-hash · /bindings/backend-postgrescluster-hippo/pgbouncer-empty · /bindings/backend-postgrescluster-hippo/port · /bindings/backend-postgrescluster-hippo/dns.crt · /bindings/backend-postgrescluster-hippo/pgbouncer-uri · /bindings/backend-postgrescluster-hippo/root.crt · /bindings/backend-postgrescluster-hippo/ssh_config · /bindings/backend-postgrescluster-hippo/dns.key · /bindings/backend-postgrescluster-hippo/host · /bindings/backend-postgrescluster-hippo/patroni.crt-combined · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.ca-roots · /bindings/backend-postgrescluster-hippo/tls.key · /bindings/backend-postgrescluster-hippo/verifier · /bindings/backend-postgrescluster-hippo/ca.crt · /bindings/backend-postgrescluster-hippo/dbname · /bindings/backend-postgrescluster-hippo/patroni.ca-roots · /bindings/backend-postgrescluster-hippo/pgbackrest_repo.conf · /bindings/backend-postgrescluster-hippo/pgbouncer-port · /bindings/backend-postgrescluster-hippo/pgbouncer-verifier · /bindings/backend-postgrescluster-hippo/id_ecdsa · /bindings/backend-postgrescluster-hippo/id_ecdsa.pub · /bindings/backend-postgrescluster-hippo/pgbouncer-password · /bindings/backend-postgrescluster-hippo/pgbouncer-users.txt · /bindings/backend-postgrescluster-hippo/sshd_config · /bindings/backend-postgrescluster-hippo/tls.crt Everything that was an environment variable in the key=value format in the earlier odo describe output is now mounted as a file. Use the cat command to view the contents of some of these files: Example command: USD odo exec -- cat /bindings/backend-postgrescluster-hippo/password Example output: q({JC:jn^mm/Bw}eu+j.GX{k Example command: USD odo exec -- cat /bindings/backend-postgrescluster-hippo/user Example output: hippo Example command: USD odo exec -- cat /bindings/backend-postgrescluster-hippo/clusterIP Example output: 10.101.78.56 3.5.6.4.2. Using --inlined The result of using --bind-as-files and --inlined together is similar to using odo link --inlined . The manifest of the link gets stored in the devfile.yaml , instead of being stored in a separate file under kubernetes/ directory. Other than that, the odo describe output would be the same as earlier. 3.5.6.4.3. 
Custom bindings When you pass custom bindings while linking the backend component with the PostgreSQL service, these custom bindings are injected not as environment variables but are mounted as files. For example: USD odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}' --bind-as-files USD odo push These custom bindings get mounted as files instead of being injected as environment variables. To validate that this worked, run the following command: Example command: USD odo exec -- cat /bindings/backend-postgrescluster-hippo/pgVersion Example output: 13 Example command: USD odo exec -- cat /bindings/backend-postgrescluster-hippo/pgImage Example output: registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0 3.5.7. odo registry odo uses the portable devfile format to describe the components. odo can connect to various devfile registries, to download devfiles for different languages and frameworks. You can connect to publicly available devfile registries, or you can install your own Secure Registry . You can use the odo registry command to manage the registries that are used by odo to retrieve devfile information. 3.5.7.1. Listing the registries To list the registries currently contacted by odo , run the command: USD odo registry list Example output: NAME URL SECURE DefaultDevfileRegistry https://registry.devfile.io No DefaultDevfileRegistry is the default registry used by odo; it is provided by the devfile.io project. 3.5.7.2. Adding a registry To add a registry, run the command: USD odo registry add Example output: USD odo registry add StageRegistry https://registry.stage.devfile.io New registry successfully added If you are deploying your own Secure Registry, you can specify the personal access token to authenticate to the secure registry with the --token flag: USD odo registry add MyRegistry https://myregistry.example.com --token <access_token> New registry successfully added 3.5.7.3. Deleting a registry To delete a registry, run the command: USD odo registry delete Example output: USD odo registry delete StageRegistry ? Are you sure you want to delete registry "StageRegistry" Yes Successfully deleted registry Use the --force (or -f ) flag to force the deletion of the registry without confirmation. 3.5.7.4. Updating a registry To update the URL or the personal access token of a registry already registered, run the command: USD odo registry update Example output: USD odo registry update MyRegistry https://otherregistry.example.com --token <other_access_token> ? Are you sure you want to update registry "MyRegistry" Yes Successfully updated registry Use the --force (or -f ) flag to force the update of the registry without confirmation. 3.5.8. odo service odo can deploy services with the help of Operators . The list of available Operators and services available for installation can be found using the odo catalog command. Services are created in the context of a component , so run the odo create command before you deploy services. A service is deployed using two steps: Define the service and store its definition in the devfile. Deploy the defined service to the cluster, using the odo push command. 3.5.8.1. 
Creating a new service To create a new service, run the command: USD odo service create For example, to create an instance of a Redis service named my-redis-service , you can run the following command: Example output USD odo catalog list services Services available through Operators NAME CRDs redis-operator.v0.8.0 RedisCluster, Redis USD odo service create redis-operator.v0.8.0/Redis my-redis-service Successfully added service to the configuration; do 'odo push' to create service on the cluster This command creates a Kubernetes manifest in the kubernetes/ directory, containing the definition of the service, and this file is referenced from the devfile.yaml file. USD cat kubernetes/odo-service-my-redis-service.yaml Example output apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 imagePullPolicy: IfNotPresent resources: limits: cpu: 101m memory: 128Mi requests: cpu: 101m memory: 128Mi serviceType: ClusterIP redisExporter: enabled: false image: quay.io/opstree/redis-exporter:1.0 storage: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi Example command USD cat devfile.yaml Example output [...] components: - kubernetes: uri: kubernetes/odo-service-my-redis-service.yaml name: my-redis-service [...] Note that the name of the created instance is optional. If you do not provide a name, it will be the lowercase name of the service. For example, the following command creates an instance of a Redis service named redis : USD odo service create redis-operator.v0.8.0/Redis 3.5.8.1.1. Inlining the manifest By default, a new manifest is created in the kubernetes/ directory, referenced from the devfile.yaml file. It is possible to inline the manifest inside the devfile.yaml file using the --inlined flag: USD odo service create redis-operator.v0.8.0/Redis my-redis-service --inlined Successfully added service to the configuration; do 'odo push' to create service on the cluster Example command USD cat devfile.yaml Example output [...] components: - kubernetes: inlined: | apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 imagePullPolicy: IfNotPresent resources: limits: cpu: 101m memory: 128Mi requests: cpu: 101m memory: 128Mi serviceType: ClusterIP redisExporter: enabled: false image: quay.io/opstree/redis-exporter:1.0 storage: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi name: my-redis-service [...] 3.5.8.1.2. Configuring the service Without specific customization, the service will be created with a default configuration. You can use either command-line arguments or a file to specify your own configuration. 3.5.8.1.2.1. Using command-line arguments Use the --parameters (or -p ) flag to specify your own configuration. 
The following example configures the Redis service with three parameters: USD odo service create redis-operator.v0.8.0/Redis my-redis-service \ -p kubernetesConfig.image=quay.io/opstree/redis:v6.2.5 \ -p kubernetesConfig.serviceType=ClusterIP \ -p redisExporter.image=quay.io/opstree/redis-exporter:1.0 Successfully added service to the configuration; do 'odo push' to create service on the cluster Example command USD cat kubernetes/odo-service-my-redis-service.yaml Example output apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 serviceType: ClusterIP redisExporter: image: quay.io/opstree/redis-exporter:1.0 You can obtain the possible parameters for a specific service using the odo catalog describe service command. 3.5.8.1.2.2. Using a file Use a YAML manifest to configure your own specification. In the following example, the Redis service is configured with three parameters. Create a manifest: USD cat > my-redis.yaml <<EOF apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 serviceType: ClusterIP redisExporter: image: quay.io/opstree/redis-exporter:1.0 EOF Create the service from the manifest: USD odo service create --from-file my-redis.yaml Successfully added service to the configuration; do 'odo push' to create service on the cluster 3.5.8.2. Deleting a service To delete a service, run the command: USD odo service delete Example output USD odo service list NAME MANAGED BY ODO STATE AGE Redis/my-redis-service Yes (api) Deleted locally 5m39s USD odo service delete Redis/my-redis-service ? Are you sure you want to delete Redis/my-redis-service Yes Service "Redis/my-redis-service" has been successfully deleted; do 'odo push' to delete service from the cluster Use the --force (or -f ) flag to force the deletion of the service without confirmation. 3.5.8.3. Listing services To list the services created for your component, run the command: USD odo service list Example output USD odo service list NAME MANAGED BY ODO STATE AGE Redis/my-redis-service-1 Yes (api) Not pushed Redis/my-redis-service-2 Yes (api) Pushed 52s Redis/my-redis-service-3 Yes (api) Deleted locally 1m22s For each service, STATE indicates if the service has been pushed to the cluster using the odo push command, or if the service is still running on the cluster but removed from the devfile locally using the odo service delete command. 3.5.8.4. Getting information about a service To get details of a service such as its kind, version, name, and list of configured parameters, run the command: USD odo service describe Example output USD odo service describe Redis/my-redis-service Version: redis.redis.opstreelabs.in/v1beta1 Kind: Redis Name: my-redis-service Parameters: NAME VALUE kubernetesConfig.image quay.io/opstree/redis:v6.2.5 kubernetesConfig.serviceType ClusterIP redisExporter.image quay.io/opstree/redis-exporter:1.0 3.5.9. odo storage odo lets users manage storage volumes that are attached to the components. A storage volume can be either an ephemeral volume using an emptyDir Kubernetes volume, or a Persistent Volume Claim (PVC). A PVC allows users to claim a persistent volume (such as a GCE PersistentDisk or an iSCSI volume) without understanding the details of the particular cloud environment. The persistent storage volume can be used to persist data across restarts and rebuilds of the component. 3.5.9.1. 
Adding a storage volume To add a storage volume to the cluster, run the command: USD odo storage create Example output: USD odo storage create store --path /data --size 1Gi [✓] Added storage store to nodejs-project-ufyy USD odo storage create tempdir --path /tmp --size 2Gi --ephemeral [✓] Added storage tempdir to nodejs-project-ufyy Please use `odo push` command to make the storage accessible to the component In the above example, the first storage volume has been mounted to the /data path and has a size of 1Gi , and the second volume has been mounted to /tmp and is ephemeral. 3.5.9.2. Listing the storage volumes To check the storage volumes currently used by the component, run the command: USD odo storage list Example output: USD odo storage list The component 'nodejs-project-ufyy' has the following storage attached: NAME SIZE PATH STATE store 1Gi /data Not Pushed tempdir 2Gi /tmp Not Pushed 3.5.9.3. Deleting a storage volume To delete a storage volume, run the command: USD odo storage delete Example output: USD odo storage delete store -f Deleted storage store from nodejs-project-ufyy Please use `odo push` command to delete the storage from the cluster In the above example, the -f flag forces deletion of the storage without asking for user confirmation. 3.5.9.4. Adding storage to a specific container If your devfile has multiple containers, you can specify which container you want the storage to attach to, using the --container flag in the odo storage create command. The following example is an excerpt from a devfile with multiple containers : components: - name: nodejs1 container: image: registry.access.redhat.com/ubi8/nodejs-12:1-36 memoryLimit: 1024Mi endpoints: - name: "3000-tcp" targetPort: 3000 mountSources: true - name: nodejs2 container: image: registry.access.redhat.com/ubi8/nodejs-12:1-36 memoryLimit: 1024Mi In the example, there are two containers, nodejs1 and nodejs2 . To attach storage to the nodejs2 container, use the following command: USD odo storage create --container Example output: USD odo storage create store --path /data --size 1Gi --container nodejs2 [✓] Added storage store to nodejs-testing-xnfg Please use `odo push` command to make the storage accessible to the component You can list the storage resources using the odo storage list command: USD odo storage list Example output: The component 'nodejs-testing-xnfg' has the following storage attached: NAME SIZE PATH CONTAINER STATE store 1Gi /data nodejs2 Not Pushed 3.5.10. Common flags The following flags are available with most odo commands: Table 3.1. odo flags Command Description --context Set the context directory where the component is defined. --project Set the project for the component. Defaults to the project defined in the local configuration. If none is available, then the current project on the cluster. --app Set the application of the component. Defaults to the application defined in the local configuration. If none is available, then app . --kubeconfig Set the path to the kubeconfig value if not using the default configuration. --show-log Use this flag to see the logs. -f , --force Use this flag to tell the command not to prompt the user for confirmation. -v , --v Set the verbosity level. See Logging in odo for more information. -h , --help Output the help for a command. Note Some flags might not be available for some commands. Run the command with the --help flag to get a list of all the available flags. 3.5.11.
JSON output The odo commands that output content generally accept a -o json flag to output this content in JSON format, suitable for other programs to parse this output more easily. The output structure is similar to Kubernetes resources, with the kind , apiVersion , metadata , spec , and status fields. List commands return a List resource, containing an items (or similar) field listing the items of the list, with each item also being similar to Kubernetes resources. Delete commands return a Status resource; see the Status Kubernetes resource . Other commands return a resource associated with the command, for example, Application , Storage , URL , and so on. The full list of commands currently accepting the -o json flag is: Commands Kind (version) Kind (version) of list items Complete content? odo application describe Application (odo.dev/v1alpha1) n/a no odo application list List (odo.dev/v1alpha1) Application (odo.dev/v1alpha1) ? odo catalog list components List (odo.dev/v1alpha1) missing yes odo catalog list services List (odo.dev/v1alpha1) ClusterServiceVersion (operators.coreos.com/v1alpha1) ? odo catalog describe component missing n/a yes odo catalog describe service CRDDescription (odo.dev/v1alpha1) n/a yes odo component create Component (odo.dev/v1alpha1) n/a yes odo component describe Component (odo.dev/v1alpha1) n/a yes odo component list List (odo.dev/v1alpha1) Component (odo.dev/v1alpha1) yes odo config view DevfileConfiguration (odo.dev/v1alpha1) n/a yes odo debug info OdoDebugInfo (odo.dev/v1alpha1) n/a yes odo env view EnvInfo (odo.dev/v1alpha1) n/a yes odo preference view PreferenceList (odo.dev/v1alpha1) n/a yes odo project create Project (odo.dev/v1alpha1) n/a yes odo project delete Status (v1) n/a yes odo project get Project (odo.dev/v1alpha1) n/a yes odo project list List (odo.dev/v1alpha1) Project (odo.dev/v1alpha1) yes odo registry list List (odo.dev/v1alpha1) missing yes odo service create Service n/a yes odo service describe Service n/a yes odo service list List (odo.dev/v1alpha1) Service yes odo storage create Storage (odo.dev/v1alpha1) n/a yes odo storage delete Status (v1) n/a yes odo storage list List (odo.dev/v1alpha1) Storage (odo.dev/v1alpha1) yes odo url list List (odo.dev/v1alpha1) URL (odo.dev/v1alpha1) yes | [
"odo delete --deploy",
"odo login -u developer -p developer",
"odo catalog list components",
"Odo Devfile Components: NAME DESCRIPTION REGISTRY dotnet50 Stack with .NET 5.0 DefaultDevfileRegistry dotnet60 Stack with .NET 6.0 DefaultDevfileRegistry dotnetcore31 Stack with .NET Core 3.1 DefaultDevfileRegistry go Stack with the latest Go version DefaultDevfileRegistry java-maven Upstream Maven and OpenJDK 11 DefaultDevfileRegistry java-openliberty Java application Maven-built stack using the Open Liberty ru... DefaultDevfileRegistry java-openliberty-gradle Java application Gradle-built stack using the Open Liberty r... DefaultDevfileRegistry java-quarkus Quarkus with Java DefaultDevfileRegistry java-springboot Spring Boot(R) using Java DefaultDevfileRegistry java-vertx Upstream Vert.x using Java DefaultDevfileRegistry java-websphereliberty Java application Maven-built stack using the WebSphere Liber... DefaultDevfileRegistry java-websphereliberty-gradle Java application Gradle-built stack using the WebSphere Libe... DefaultDevfileRegistry java-wildfly Upstream WildFly DefaultDevfileRegistry java-wildfly-bootable-jar Java stack with WildFly in bootable Jar mode, OpenJDK 11 and... DefaultDevfileRegistry nodejs Stack with Node.js 14 DefaultDevfileRegistry nodejs-angular Stack with Angular 12 DefaultDevfileRegistry nodejs-nextjs Stack with Next.js 11 DefaultDevfileRegistry nodejs-nuxtjs Stack with Nuxt.js 2 DefaultDevfileRegistry nodejs-react Stack with React 17 DefaultDevfileRegistry nodejs-svelte Stack with Svelte 3 DefaultDevfileRegistry nodejs-vue Stack with Vue 3 DefaultDevfileRegistry php-laravel Stack with Laravel 8 DefaultDevfileRegistry python Python Stack with Python 3.7 DefaultDevfileRegistry python-django Python3.7 with Django DefaultDevfileRegistry",
"curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 -o odo",
"curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64.tar.gz -o odo.tar.gz tar xvzf odo.tar.gz",
"chmod +x <filename>",
"echo USDPATH",
"odo version",
"C:\\> path",
"C:\\> odo version",
"curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64 -o odo",
"curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64.tar.gz -o odo.tar.gz tar xvzf odo.tar.gz",
"chmod +x odo",
"echo USDPATH",
"odo version",
"ext install redhat.vscode-openshift-connector",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift Developer Tools and Services*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"ocp-tools-4.9-for-rhel-8-x86_64-rpms\"",
"yum install odo",
"odo version",
"odo preference view",
"PARAMETER CURRENT_VALUE UpdateNotification NamePrefix Timeout BuildTimeout PushTimeout Ephemeral ConsentTelemetry true",
"odo preference set <key> <value>",
"odo preference set updatenotification false",
"Global preference was successfully updated",
"odo preference unset <key>",
"odo preference unset updatenotification ? Do you want to unset updatenotification in the preference (y/N) y",
"Global preference was successfully updated",
".git *.js tests/",
"components: - image: imageName: quay.io/myusername/myimage dockerfile: uri: ./Dockerfile 1 buildContext: USD{PROJECTS_ROOT} 2 name: component-built-from-dockerfile",
"odo catalog list components",
"NAME DESCRIPTION REGISTRY go Stack with the latest Go version DefaultDevfileRegistry java-maven Upstream Maven and OpenJDK 11 DefaultDevfileRegistry nodejs Stack with Node.js 14 DefaultDevfileRegistry php-laravel Stack with Laravel 8 DefaultDevfileRegistry python Python Stack with Python 3.7 DefaultDevfileRegistry [...]",
"odo catalog describe component",
"odo catalog describe component nodejs",
"* Registry: DefaultDevfileRegistry 1 Starter Projects: 2 --- name: nodejs-starter attributes: {} description: \"\" subdir: \"\" projectsource: sourcetype: \"\" git: gitlikeprojectsource: commonprojectsource: {} checkoutfrom: null remotes: origin: https://github.com/odo-devfiles/nodejs-ex.git zip: null custom: null",
"odo catalog list services",
"Services available through Operators NAME CRDs postgresql-operator.v0.1.1 Backup, Database redis-operator.v0.8.0 RedisCluster, Redis",
"odo catalog search service",
"odo catalog search service postgres",
"Services available through Operators NAME CRDs postgresql-operator.v0.1.1 Backup, Database",
"odo catalog describe service",
"odo catalog describe service postgresql-operator.v0.1.1/Database",
"KIND: Database VERSION: v1alpha1 DESCRIPTION: Database is the Schema for the the Database Database API FIELDS: awsAccessKeyId (string) AWS S3 accessKey/token ID Key ID of AWS S3 storage. Default Value: nil Required to create the Secret with the data to allow send the backup files to AWS S3 storage. [...]",
"odo catalog describe service redis-operator.v0.8.0",
"NAME: redis-operator.v0.8.0 DESCRIPTION: A Golang based redis operator that will make/oversee Redis standalone/cluster mode setup on top of the Kubernetes. It can create a redis cluster setup with best practices on Cloud as well as the Bare metal environment. Also, it provides an in-built monitoring capability using ... (cut short for beverity) Logging Operator is licensed under [Apache License, Version 2.0](https://github.com/OT-CONTAINER-KIT/redis-operator/blob/master/LICENSE) CRDs: NAME DESCRIPTION RedisCluster Redis Cluster Redis Redis",
"odo create nodejs mynodejs",
"odo create nodejs mynodejs --context ./node-backend",
"odo create nodejs --app myapp --project backend",
"odo catalog describe component nodejs",
"odo create nodejs --starter nodejs-starter",
"odo create mynodejs --devfile https://raw.githubusercontent.com/odo-devfiles/registry/master/devfiles/nodejs/devfile.yaml",
"odo create ? Which devfile component type do you wish to create go ? What do you wish to name the new devfile component go-api ? What project do you want the devfile component to be created in default Devfile Object Validation [✓] Checking devfile existence [164258ns] [✓] Creating a devfile component from registry: DefaultDevfileRegistry [246051ns] Validation [✓] Validating if devfile name is correct [92255ns] ? Do you want to download a starter project Yes Starter Project [✓] Downloading starter project go-starter from https://github.com/devfile-samples/devfile-stack-go.git [429ms] Please use odo push command to create the component with source deployed",
"odo delete",
"odo delete --deploy",
"odo delete --all",
"schemaVersion: 2.2.0 [...] variables: CONTAINER_IMAGE: quay.io/phmartin/myimage commands: - id: build-image apply: component: outerloop-build - id: deployk8s apply: component: outerloop-deploy - id: deploy composite: commands: - build-image - deployk8s group: kind: deploy isDefault: true components: - name: outerloop-build image: imageName: \"{{CONTAINER_IMAGE}}\" dockerfile: uri: ./Dockerfile buildContext: USD{PROJECTS_ROOT} - name: outerloop-deploy kubernetes: inlined: | kind: Deployment apiVersion: apps/v1 metadata: name: my-component spec: replicas: 1 selector: matchLabels: app: node-app template: metadata: labels: app: node-app spec: containers: - name: main image: {{CONTAINER_IMAGE}}",
"odo list",
"APP NAME PROJECT TYPE STATE MANAGED BY ODO app backend myproject spring Pushed Yes",
"odo service list",
"NAME MANAGED BY ODO STATE AGE PostgresCluster/hippo Yes (backend) Pushed 59m41s",
"odo link PostgresCluster/hippo",
"[✓] Successfully created link between component \"backend\" and service \"PostgresCluster/hippo\" To apply the link, please use `odo push`",
"odo url list",
"Found the following URLs for component backend NAME STATE URL PORT SECURE KIND 8080-tcp Pushed http://8080-tcp.192.168.39.112.nip.io 8080 false ingress",
"odo describe",
"Component Name: backend Type: spring Environment Variables: · PROJECTS_ROOT=/projects · PROJECT_SOURCE=/projects · DEBUG_PORT=5858 Storage: · m2 of size 3Gi mounted to /home/user/.m2 URLs: · http://8080-tcp.192.168.39.112.nip.io exposed via 8080 Linked Services: · PostgresCluster/hippo Environment Variables: · POSTGRESCLUSTER_PGBOUNCER-EMPTY · POSTGRESCLUSTER_PGBOUNCER.INI · POSTGRESCLUSTER_ROOT.CRT · POSTGRESCLUSTER_VERIFIER · POSTGRESCLUSTER_ID_ECDSA · POSTGRESCLUSTER_PGBOUNCER-VERIFIER · POSTGRESCLUSTER_TLS.CRT · POSTGRESCLUSTER_PGBOUNCER-URI · POSTGRESCLUSTER_PATRONI.CRT-COMBINED · POSTGRESCLUSTER_USER · pgImage · pgVersion · POSTGRESCLUSTER_CLUSTERIP · POSTGRESCLUSTER_HOST · POSTGRESCLUSTER_PGBACKREST_REPO.CONF · POSTGRESCLUSTER_PGBOUNCER-USERS.TXT · POSTGRESCLUSTER_SSH_CONFIG · POSTGRESCLUSTER_TLS.KEY · POSTGRESCLUSTER_CONFIG-HASH · POSTGRESCLUSTER_PASSWORD · POSTGRESCLUSTER_PATRONI.CA-ROOTS · POSTGRESCLUSTER_DBNAME · POSTGRESCLUSTER_PGBOUNCER-PASSWORD · POSTGRESCLUSTER_SSHD_CONFIG · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.KEY · POSTGRESCLUSTER_PGBACKREST_INSTANCE.CONF · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CA-ROOTS · POSTGRESCLUSTER_PGBOUNCER-HOST · POSTGRESCLUSTER_PORT · POSTGRESCLUSTER_ROOT.KEY · POSTGRESCLUSTER_SSH_KNOWN_HOSTS · POSTGRESCLUSTER_URI · POSTGRESCLUSTER_PATRONI.YAML · POSTGRESCLUSTER_DNS.CRT · POSTGRESCLUSTER_DNS.KEY · POSTGRESCLUSTER_ID_ECDSA.PUB · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CRT · POSTGRESCLUSTER_PGBOUNCER-PORT · POSTGRESCLUSTER_CA.CRT",
"ls kubernetes odo-service-backend-postgrescluster-hippo.yaml odo-service-hippo.yaml",
"odo unlink PostgresCluster/hippo",
"[✓] Successfully unlinked component \"backend\" from service \"PostgresCluster/hippo\" To apply the changes, please use `odo push`",
"ls kubernetes odo-service-hippo.yaml",
"odo link PostgresCluster/hippo --inlined",
"[✓] Successfully created link between component \"backend\" and service \"PostgresCluster/hippo\" To apply the link, please use `odo push`",
"kubernetes: inlined: | apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: creationTimestamp: null name: backend-postgrescluster-hippo spec: application: group: apps name: backend-app resource: deployments version: v1 bindAsFiles: false detectBindingResources: true services: - group: postgres-operator.crunchydata.com id: hippo kind: PostgresCluster name: hippo version: v1beta1 status: secret: \"\" name: backend-postgrescluster-hippo",
"odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}'",
"odo exec -- env | grep pgVersion",
"pgVersion=13",
"odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}'",
"odo exec -- env | grep -e \"pgVersion\\|pgImage\"",
"pgVersion=13 pgImage=registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0",
"Linked Services: · PostgresCluster/hippo",
"odo unlink PostgresCluster/hippo odo push",
"odo link PostgresCluster/hippo --bind-as-files odo push",
"odo describe Component Name: backend Type: spring Environment Variables: · PROJECTS_ROOT=/projects · PROJECT_SOURCE=/projects · DEBUG_PORT=5858 · SERVICE_BINDING_ROOT=/bindings · SERVICE_BINDING_ROOT=/bindings Storage: · m2 of size 3Gi mounted to /home/user/.m2 URLs: · http://8080-tcp.192.168.39.112.nip.io exposed via 8080 Linked Services: · PostgresCluster/hippo Files: · /bindings/backend-postgrescluster-hippo/pgbackrest_instance.conf · /bindings/backend-postgrescluster-hippo/user · /bindings/backend-postgrescluster-hippo/ssh_known_hosts · /bindings/backend-postgrescluster-hippo/clusterIP · /bindings/backend-postgrescluster-hippo/password · /bindings/backend-postgrescluster-hippo/patroni.yaml · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.crt · /bindings/backend-postgrescluster-hippo/pgbouncer-host · /bindings/backend-postgrescluster-hippo/root.key · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.key · /bindings/backend-postgrescluster-hippo/pgbouncer.ini · /bindings/backend-postgrescluster-hippo/uri · /bindings/backend-postgrescluster-hippo/config-hash · /bindings/backend-postgrescluster-hippo/pgbouncer-empty · /bindings/backend-postgrescluster-hippo/port · /bindings/backend-postgrescluster-hippo/dns.crt · /bindings/backend-postgrescluster-hippo/pgbouncer-uri · /bindings/backend-postgrescluster-hippo/root.crt · /bindings/backend-postgrescluster-hippo/ssh_config · /bindings/backend-postgrescluster-hippo/dns.key · /bindings/backend-postgrescluster-hippo/host · /bindings/backend-postgrescluster-hippo/patroni.crt-combined · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.ca-roots · /bindings/backend-postgrescluster-hippo/tls.key · /bindings/backend-postgrescluster-hippo/verifier · /bindings/backend-postgrescluster-hippo/ca.crt · /bindings/backend-postgrescluster-hippo/dbname · /bindings/backend-postgrescluster-hippo/patroni.ca-roots · /bindings/backend-postgrescluster-hippo/pgbackrest_repo.conf · /bindings/backend-postgrescluster-hippo/pgbouncer-port · /bindings/backend-postgrescluster-hippo/pgbouncer-verifier · /bindings/backend-postgrescluster-hippo/id_ecdsa · /bindings/backend-postgrescluster-hippo/id_ecdsa.pub · /bindings/backend-postgrescluster-hippo/pgbouncer-password · /bindings/backend-postgrescluster-hippo/pgbouncer-users.txt · /bindings/backend-postgrescluster-hippo/sshd_config · /bindings/backend-postgrescluster-hippo/tls.crt",
"odo exec -- cat /bindings/backend-postgrescluster-hippo/password",
"q({JC:jn^mm/Bw}eu+j.GX{k",
"odo exec -- cat /bindings/backend-postgrescluster-hippo/user",
"hippo",
"odo exec -- cat /bindings/backend-postgrescluster-hippo/clusterIP",
"10.101.78.56",
"odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}' --bind-as-files odo push",
"odo exec -- cat /bindings/backend-postgrescluster-hippo/pgVersion",
"13",
"odo exec -- cat /bindings/backend-postgrescluster-hippo/pgImage",
"registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0",
"odo registry list",
"NAME URL SECURE DefaultDevfileRegistry https://registry.devfile.io No",
"odo registry add",
"odo registry add StageRegistry https://registry.stage.devfile.io New registry successfully added",
"odo registry add MyRegistry https://myregistry.example.com --token <access_token> New registry successfully added",
"odo registry delete",
"odo registry delete StageRegistry ? Are you sure you want to delete registry \"StageRegistry\" Yes Successfully deleted registry",
"odo registry update",
"odo registry update MyRegistry https://otherregistry.example.com --token <other_access_token> ? Are you sure you want to update registry \"MyRegistry\" Yes Successfully updated registry",
"odo service create",
"odo catalog list services Services available through Operators NAME CRDs redis-operator.v0.8.0 RedisCluster, Redis odo service create redis-operator.v0.8.0/Redis my-redis-service Successfully added service to the configuration; do 'odo push' to create service on the cluster",
"cat kubernetes/odo-service-my-redis-service.yaml",
"apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 imagePullPolicy: IfNotPresent resources: limits: cpu: 101m memory: 128Mi requests: cpu: 101m memory: 128Mi serviceType: ClusterIP redisExporter: enabled: false image: quay.io/opstree/redis-exporter:1.0 storage: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"cat devfile.yaml",
"[...] components: - kubernetes: uri: kubernetes/odo-service-my-redis-service.yaml name: my-redis-service [...]",
"odo service create redis-operator.v0.8.0/Redis",
"odo service create redis-operator.v0.8.0/Redis my-redis-service --inlined Successfully added service to the configuration; do 'odo push' to create service on the cluster",
"cat devfile.yaml",
"[...] components: - kubernetes: inlined: | apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 imagePullPolicy: IfNotPresent resources: limits: cpu: 101m memory: 128Mi requests: cpu: 101m memory: 128Mi serviceType: ClusterIP redisExporter: enabled: false image: quay.io/opstree/redis-exporter:1.0 storage: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi name: my-redis-service [...]",
"odo service create redis-operator.v0.8.0/Redis my-redis-service -p kubernetesConfig.image=quay.io/opstree/redis:v6.2.5 -p kubernetesConfig.serviceType=ClusterIP -p redisExporter.image=quay.io/opstree/redis-exporter:1.0 Successfully added service to the configuration; do 'odo push' to create service on the cluster",
"cat kubernetes/odo-service-my-redis-service.yaml",
"apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 serviceType: ClusterIP redisExporter: image: quay.io/opstree/redis-exporter:1.0",
"cat > my-redis.yaml <<EOF apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 serviceType: ClusterIP redisExporter: image: quay.io/opstree/redis-exporter:1.0 EOF",
"odo service create --from-file my-redis.yaml Successfully added service to the configuration; do 'odo push' to create service on the cluster",
"odo service delete",
"odo service list NAME MANAGED BY ODO STATE AGE Redis/my-redis-service Yes (api) Deleted locally 5m39s",
"odo service delete Redis/my-redis-service ? Are you sure you want to delete Redis/my-redis-service Yes Service \"Redis/my-redis-service\" has been successfully deleted; do 'odo push' to delete service from the cluster",
"odo service list",
"odo service list NAME MANAGED BY ODO STATE AGE Redis/my-redis-service-1 Yes (api) Not pushed Redis/my-redis-service-2 Yes (api) Pushed 52s Redis/my-redis-service-3 Yes (api) Deleted locally 1m22s",
"odo service describe",
"odo service describe Redis/my-redis-service Version: redis.redis.opstreelabs.in/v1beta1 Kind: Redis Name: my-redis-service Parameters: NAME VALUE kubernetesConfig.image quay.io/opstree/redis:v6.2.5 kubernetesConfig.serviceType ClusterIP redisExporter.image quay.io/opstree/redis-exporter:1.0",
"odo storage create",
"odo storage create store --path /data --size 1Gi [✓] Added storage store to nodejs-project-ufyy odo storage create tempdir --path /tmp --size 2Gi --ephemeral [✓] Added storage tempdir to nodejs-project-ufyy Please use `odo push` command to make the storage accessible to the component",
"odo storage list",
"odo storage list The component 'nodejs-project-ufyy' has the following storage attached: NAME SIZE PATH STATE store 1Gi /data Not Pushed tempdir 2Gi /tmp Not Pushed",
"odo storage delete",
"odo storage delete store -f Deleted storage store from nodejs-project-ufyy Please use `odo push` command to delete the storage from the cluster",
"components: - name: nodejs1 container: image: registry.access.redhat.com/ubi8/nodejs-12:1-36 memoryLimit: 1024Mi endpoints: - name: \"3000-tcp\" targetPort: 3000 mountSources: true - name: nodejs2 container: image: registry.access.redhat.com/ubi8/nodejs-12:1-36 memoryLimit: 1024Mi",
"odo storage create --container",
"odo storage create store --path /data --size 1Gi --container nodejs2 [✓] Added storage store to nodejs-testing-xnfg Please use `odo push` command to make the storage accessible to the component",
"odo storage list",
"The component 'nodejs-testing-xnfg' has the following storage attached: NAME SIZE PATH CONTAINER STATE store 1Gi /data nodejs2 Not Pushed"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/cli_tools/developer-cli-odo |
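As a rough, unverified sketch of how the -o json output described in the JSON output section above can be consumed by another program, the command below pipes the storage list through jq. The field paths .items[].metadata.name and .items[].spec.path are assumptions based on the Kubernetes-style structure (kind, apiVersion, metadata, spec, status) described in that section, so the exact keys may differ on a real cluster.
# List the storage volumes of the current component as JSON and print the
# name and mount path of each item (field paths are assumed, not verified).
odo storage list -o json | jq -r '.items[] | "\(.metadata.name)  \(.spec.path)"'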
Chapter 6. Uninstalling a cluster on Nutanix | Chapter 6. Uninstalling a cluster on Nutanix You can remove a cluster that you deployed to Nutanix. 6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_nutanix/uninstalling-cluster-nutanix |
Chapter 12. ImageContentPolicy [config.openshift.io/v1] | Chapter 12. ImageContentPolicy [config.openshift.io/v1] Description ImageContentPolicy holds cluster-wide information about how to handle registry mirror rules. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration 12.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description repositoryDigestMirrors array repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To pull image from mirrors by tags, should set the "allowMirrorByTags". Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. repositoryDigestMirrors[] object RepositoryDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. 12.1.2. .spec.repositoryDigestMirrors Description repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To pull image from mirrors by tags, should set the "allowMirrorByTags". Each "source" repository is treated independently; configurations for different "source" repositories don't interact. 
If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Type array 12.1.3. .spec.repositoryDigestMirrors[] Description RepositoryDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. Type object Required source Property Type Description allowMirrorByTags boolean allowMirrorByTags if true, the mirrors can be used to pull the images that are referenced by their tags. Default is false, the mirrors only work when pulling the images that are referenced by their digests. Pulling images by tag can potentially yield different images, depending on which endpoint we pull from. Forcing digest-pulls for mirrors avoids that issue. mirrors array (string) mirrors is zero or more repositories that may also contain the same images. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. No mirror will be configured. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. Other cluster configuration, including (but not limited to) other repositoryDigestMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. source string source is the repository that users refer to, e.g. in image pull specifications. 12.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/imagecontentpolicies DELETE : delete collection of ImageContentPolicy GET : list objects of kind ImageContentPolicy POST : create an ImageContentPolicy /apis/config.openshift.io/v1/imagecontentpolicies/{name} DELETE : delete an ImageContentPolicy GET : read the specified ImageContentPolicy PATCH : partially update the specified ImageContentPolicy PUT : replace the specified ImageContentPolicy /apis/config.openshift.io/v1/imagecontentpolicies/{name}/status GET : read status of the specified ImageContentPolicy PATCH : partially update status of the specified ImageContentPolicy PUT : replace status of the specified ImageContentPolicy 12.2.1. /apis/config.openshift.io/v1/imagecontentpolicies Table 12.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ImageContentPolicy Table 12.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 12.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageContentPolicy Table 12.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.5. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicyList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageContentPolicy Table 12.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.7. Body parameters Parameter Type Description body ImageContentPolicy schema Table 12.8. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicy schema 201 - Created ImageContentPolicy schema 202 - Accepted ImageContentPolicy schema 401 - Unauthorized Empty 12.2.2. /apis/config.openshift.io/v1/imagecontentpolicies/{name} Table 12.9. Global path parameters Parameter Type Description name string name of the ImageContentPolicy Table 12.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an ImageContentPolicy Table 12.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 12.12. Body parameters Parameter Type Description body DeleteOptions schema Table 12.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageContentPolicy Table 12.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 12.15. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageContentPolicy Table 12.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.17. Body parameters Parameter Type Description body Patch schema Table 12.18. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageContentPolicy Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body ImageContentPolicy schema Table 12.21. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicy schema 201 - Created ImageContentPolicy schema 401 - Unauthorized Empty 12.2.3. /apis/config.openshift.io/v1/imagecontentpolicies/{name}/status Table 12.22. Global path parameters Parameter Type Description name string name of the ImageContentPolicy Table 12.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ImageContentPolicy Table 12.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 12.25. HTTP responses HTTP code Reponse body 200 - OK ImageContentPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageContentPolicy Table 12.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or fewer, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.27. Body parameters Parameter Type Description body Patch schema Table 12.28. HTTP responses HTTP code Response body 200 - OK ImageContentPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageContentPolicy Table 12.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or fewer, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present.
The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.30. Body parameters Parameter Type Description body ImageContentPolicy schema Table 12.31. HTTP responses HTTP code Response body 200 - OK ImageContentPolicy schema 201 - Created ImageContentPolicy schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/config_apis/imagecontentpolicy-config-openshift-io-v1 |
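The dryRun, PATCH, and GET operations described above are easiest to exercise through the oc client, which sets the corresponding query parameters on your behalf. The following is a minimal sketch, not taken from the product documentation: the policy name example-policy and the annotation value are hypothetical placeholders, and a kubeconfig with sufficient privileges is assumed.

# Server-side dry run of a merge patch against an ImageContentPolicy named "example-policy" (hypothetical object)
oc patch imagecontentpolicy example-policy \
  --type=merge \
  -p '{"metadata":{"annotations":{"owner":"platform-team"}}}' \
  --dry-run=server

# Read the object back, equivalent to the GET operation described above
oc get imagecontentpolicy example-policy -o yaml

If the dry run succeeds, dropping --dry-run=server persists the change; sending an invalid dryRun directive directly to the API instead returns an error, as noted in the parameter description.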
Installing Red Hat Update Infrastructure | Installing Red Hat Update Infrastructure Red Hat Update Infrastructure 4 List of requirements, setting up nodes, configuring storage, and installing Red Hat Update Infrastructure 4 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/installing_red_hat_update_infrastructure/index |
Chapter 4. Ceph authentication configuration | Chapter 4. Ceph authentication configuration As a storage administrator, authenticating users and services is important to the security of the Red Hat Ceph Storage cluster. Red Hat Ceph Storage includes the Cephx protocol, as the default, for cryptographic authentication, and the tools to manage authentication in the storage cluster. As part of the Ceph authentication configuration, consider key rotation for your Ceph and gateway daemons for increased security. Key rotation is done through the command line, with cephadm . See Enabling key rotation for more details. Prerequisites Installation of the Red Hat Ceph Storage software. 4.1. Cephx authentication The cephx protocol is enabled by default. Cryptographic authentication has some computational costs, though they are generally quite low. If the network environment connecting clients and hosts is considered safe and you cannot afford the computational cost of authentication, you can disable it. When deploying a Ceph storage cluster, the deployment tool will create the client.admin user and keyring. Important Red Hat recommends using authentication. Note If you disable authentication, you are at risk of a man-in-the-middle attack altering client and server messages, which could lead to significant security issues. Enabling and disabling Cephx Enabling Cephx requires that you have deployed keys for the Ceph Monitors and OSDs. When toggling Cephx authentication on or off, you do not have to repeat the deployment procedures. 4.2. Enabling Cephx When cephx is enabled, Ceph will look for the keyring in the default search path, which includes /etc/ceph/$cluster.$name.keyring . You can override this location by adding a keyring option in the [global] section of the Ceph configuration file, but this is not recommended. Execute the following procedures to enable cephx on a cluster with authentication disabled. If you or your deployment utility have already generated the keys, you may skip the steps related to generating keys. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Create a client.admin key, and save a copy of the key for your client host: Warning This will erase the contents of any existing /etc/ceph/client.admin.keyring file. Do not perform this step if a deployment tool has already done it for you. Create a keyring for the monitor cluster and generate a monitor secret key: Copy the monitor keyring into a ceph.mon.keyring file in every monitor's mon data directory. For example, to copy it to mon.a in cluster ceph , use the following: Generate a secret key for every OSD, where ID is the OSD number: By default, the cephx authentication protocol is enabled. Note If the cephx authentication protocol was disabled previously by setting the authentication options to none , then removing the following lines under the [global] section in the Ceph configuration file ( /etc/ceph/ceph.conf ) reenables the cephx authentication protocol: Start or restart the Ceph storage cluster. Important Enabling cephx requires downtime because the cluster needs to be completely restarted, or it needs to be shut down and then started while client I/O is disabled.
These flags need to be set before restarting or shutting down the storage cluster: Once cephx is enabled and all PGs are active and clean, unset the flags: 4.3. Disabling Cephx The following procedure describes how to disable Cephx. If your cluster environment is relatively safe, disabling Cephx lets you avoid the computational expense of running authentication. Important Red Hat recommends enabling authentication. However, it may be easier during setup or troubleshooting to temporarily disable authentication. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Disable cephx authentication by setting the following options in the [global] section of the Ceph configuration file: Example Start or restart the Ceph storage cluster. 4.4. Cephx user keyrings When you run Ceph with authentication enabled, the ceph administrative commands and Ceph clients require authentication keys to access the Ceph storage cluster. The most common way to provide these keys to the ceph administrative commands and clients is to include a Ceph keyring under the /etc/ceph/ directory. The file name is usually ceph.client.admin.keyring or $cluster.client.admin.keyring . If you include the keyring under the /etc/ceph/ directory, you do not need to specify a keyring entry in the Ceph configuration file. Important Red Hat recommends copying the Red Hat Ceph Storage cluster keyring file to nodes where you will run administrative commands, because it contains the client.admin key. To do so, execute the following command: Replace USER with the user name used on the host with the client.admin key and HOSTNAME with the host name of that host. Note Ensure the ceph.keyring file has appropriate permissions set on the client machine. You can specify the key itself in the Ceph configuration file using the key setting, which is not recommended, or a path to a key file using the keyfile setting. 4.5. Cephx daemon keyrings Administrative users or deployment tools might generate daemon keyrings in the same way as generating user keyrings. By default, Ceph stores daemon keyrings inside their data directory. Each daemon type has a default keyring location and a set of capabilities that are necessary for the daemon to function. Note The monitor keyring contains a key but no capabilities, and is not part of the Ceph storage cluster auth database. The daemon data directory locations default to directories of the form: Example You can override these locations, but it is not recommended. 4.6. Cephx message signatures Ceph provides fine-grained control so you can enable or disable signatures for service messages between the client and Ceph. You can enable or disable signatures for messages between Ceph daemons. Important Red Hat recommends that Ceph authenticate all ongoing messages between the entities using the session key set up during the initial authentication. Note Ceph kernel modules do not support signatures yet. | [
"ceph auth get-or-create client.admin mon 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring",
"ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'",
"cp /tmp/ceph.mon.keyring /var/lib/ceph/mon/ceph-a/keyring",
"ceph auth get-or-create osd. ID mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph- ID /keyring",
"auth_cluster_required = none auth_service_required = none auth_client_required = none",
"ceph osd set noout ceph osd set norecover ceph osd set norebalance ceph osd set nobackfill ceph osd set nodown ceph osd set pause",
"ceph osd unset noout ceph osd unset norecover ceph osd unset norebalance ceph osd unset nobackfill ceph osd unset nodown ceph osd unset pause",
"auth_cluster_required = none auth_service_required = none auth_client_required = none",
"scp USER @ HOSTNAME :/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring",
"/var/lib/ceph/USDtype/ CLUSTER - ID",
"/var/lib/ceph/osd/ceph-12"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/configuration_guide/ceph-authentication-configuration |
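Section 4.4 above recommends copying the admin keyring to administrative nodes and setting appropriate permissions on the client machine, but does not show the permission step itself. The commands below are an illustrative sketch only: the host name mon-host is a placeholder, and your site policy may require different ownership.

# Copy the admin keyring from a monitor node, then restrict it to root
scp root@mon-host:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
chown root:root /etc/ceph/ceph.client.admin.keyring
chmod 600 /etc/ceph/ceph.client.admin.keyring

# Confirm the key is usable against the cluster (uses client.admin by default)
ceph health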
Chapter 18. Automating Configuration Tasks using Ansible | Chapter 18. Automating Configuration Tasks using Ansible Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for Red Hat Virtualization, and Ansible modules are available to allow you to automate post-installation tasks such as data center setup and configuration, managing users, or virtual machine operations. Ansible provides an easier method of automating Red Hat Virtualization configuration compared to REST APIs and SDKs, and allows you to integrate with other Ansible modules. For more information about the Ansible modules available for Red Hat Virtualization, see the Ovirt modules in the Ansible documentation. Note Ansible Tower is a graphically enabled framework accessible through a web interface and REST APIs for Ansible. If you want support for Ansible Tower, then you must have an Ansible Tower license, which is not part of the Red Hat Virtualization subscription. Ansible is shipped with Red Hat Virtualization. To install Ansible, run the following command on the Manager machine: See the Ansible Documentation for alternate installation instructions and information about using Ansible. Note To permanently increase the verbose level for the Manager when running Ansible playbooks, create a configuration file in /etc/ovirt-engine/engine.conf.d/ with the following line: ANSIBLE_PLAYBOOK_VERBOSE_LEVEL=4 You must restart the Manager after creating the file by running systemctl restart ovirt-engine . 18.1. Ansible Roles Multiple Ansible roles are available to help configure and manage various parts of the Red Hat Virtualization infrastructure. Ansible roles provide a method of modularizing Ansible code by breaking up large playbooks into smaller, reusable files that can be shared with other users. The Ansible roles available for Red Hat Virtualization are categorized by the various infrastructure components. For more information about the Ansible roles, see the oVirt Ansible Roles documentation. For the documentation installed with Ansible roles, see Section 18.1.1, "Installing Ansible Roles" . 18.1.1. Installing Ansible Roles You can install Ansible roles for Red Hat Virtualization from the Red Hat Virtualization Manager repository. Use the following command to install the Ansible roles on the Manager machine: By default, the roles are installed to /usr/share/ansible/roles . The structure of the ovirt-ansible-roles package is as follows: /usr/share/ansible/roles - stores the roles. /usr/share/doc/ovirt-ansible-roles/ - stores the examples, a basic overview, and the license. /usr/share/doc/ansible/roles/ role_name - stores the documentation specific to the role. 18.1.2. Using Ansible Roles to Configure Red Hat Virtualization The following procedure guides you through creating and running a playbook that uses Ansible roles to configure Red Hat Virtualization. This example uses Ansible to connect to the Manager on the local machine and create a new data center. Prerequisites Ensure the roles_path option in /etc/ansible/ansible.cfg points to the location of your Ansible roles ( /usr/share/ansible/roles ). Ensure that you have the Python SDK installed on the machine running the playbook. Configuring Red Hat Virtualization using Ansible Roles Create a file in your working directory to store the Red Hat Virtualization Manager user password: Encrypt the user password. You will be asked for a Vault password.
Create a file that stores the Manager details such as the URL, certificate location, and user. Note If you prefer, these variables can be added directly to the playbook instead. Create your playbook. To simplify this you can copy and modify an example in /usr/share/doc/ovirt-ansible-roles/examples . Run the playbook. You have successfully used the ovirt-datacenters Ansible role to create a data center named mydatacenter . | [
"yum install ansible",
"yum install ovirt-ansible-roles",
"cat passwords.yml --- engine_password: youruserpassword",
"ansible-vault encrypt passwords.yml New Vault password: Confirm New Vault password:",
"cat engine_vars.yml --- engine_url: https://example.engine.redhat.com/ovirt-engine/api engine_user: admin@internal engine_cafile: /etc/pki/ovirt-engine/ca.pem",
"cat rhv_infra.yml --- - name: RHV infrastructure hosts: localhost connection: local gather_facts: false vars_files: # Contains variables to connect to the Manager - engine_vars.yml # Contains encrypted engine_password variable using ansible-vault - passwords.yml pre_tasks: - name: Login to RHV ovirt_auth: url: \"{{ engine_url }}\" username: \"{{ engine_user }}\" password: \"{{ engine_password }}\" ca_file: \"{{ engine_cafile | default(omit) }}\" insecure: \"{{ engine_insecure | default(true) }}\" tags: - always vars: data_center_name: mydatacenter data_center_description: mydatacenter data_center_local: false compatibility_version: 4.1 roles: - ovirt-datacenters post_tasks: - name: Logout from RHV ovirt_auth: state: absent ovirt_auth: \"{{ ovirt_auth }}\" tags: - always",
"ansible-playbook --ask-vault-pass rhv_infra.yml"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-automating_rhv_configuration_using_ansible |
Windows Container Support for OpenShift | Windows Container Support for OpenShift OpenShift Container Platform 4.16 Red Hat OpenShift for Windows Containers Guide Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/windows_container_support_for_openshift/index |
8.131. opencv | 8.131. opencv 8.131.1. RHBA-2013:1118 - opencv bug fix update Updated opencv packages that fix one bug are now available for Red Hat Enterprise Linux 6. OpenCV is the open source computer vision library. It is a collection of C functions and C++ classes that implement image processing and computer vision algorithms. Bug Fix BZ#658060 The OpenCVConfig.cmake file had different contents on 32-bit and 64-bit architectures and was installed under the /usr/share directory. Consequently, the opencv-devel package could not be installed in a multilib environment. With this update, the OpenCVConfig.cmake file has been moved to the /usr/lib(64) directory and the opencv-devel package can now be installed in a multilib environment. Users of opencv are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/opencv |
Chapter 10. Uninstalling a cluster on VMC | Chapter 10. Uninstalling a cluster on VMC You can remove a cluster installed on VMware vSphere infrastructure that you deployed to VMware Cloud (VMC) on AWS by using installer-provisioned infrastructure. 10.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_vmc/uninstalling-cluster-vmc |
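The final optional step, removing the installation directory and the installer binary, can be done with ordinary shell commands once openshift-install destroy cluster has finished. This is only a sketch; the paths below are placeholders for wherever you unpacked the installer and stored the installation files.

# Optional cleanup after the destroy completes (adjust paths to your environment)
rm -rf <installation_directory>
rm -f ./openshift-install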
Chapter 3. Selecting an installed Red Hat build of OpenJDK version for a specific application | Chapter 3. Selecting an installed Red Hat build of OpenJDK version for a specific application Some applications require a specific Red Hat build of OpenJDK version to run. If multiple versions of Red Hat build of OpenJDK are installed on the system using the yum package manager or portable bundle, you can select a Red Hat build of OpenJDK version for each application where necessary by setting the value of the JAVA_HOME environment variable or using a wrapper script. Prerequisites Multiple versions of Red Hat build of OpenJDK installed on the machine. Ensure that the application you want to run is installed. Procedure Set the JAVA_HOME environment variable. For example, if openjdk-8 was installed using yum : $ JAVA_HOME=/usr/lib/jvm/java-8-openjdk Note The symbolic link java-8-openjdk is controlled by the alternatives command. Do one of the following: Launch the application using the default, system-wide configuration. Launch the application specifying the JAVA_HOME variable: | [
"mvn --version Apache Maven 3.5.4 (Red Hat 3.5.4-5) Maven home: /usr/share/maven Java version: 1.8.0_242, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.el8_1.x86_64/jre Default locale: en_US, platform encoding: UTF-8 OS name: \"linux\", version: \"4.18.0-147.3.1.el8_1.x86_64\", arch: \"amd64\", family: \"unix\"",
"JAVA_HOME=/usr/lib/jvm/java-8-openjdk mvn --version Apache Maven 3.5.4 (Red Hat 3.5.4-5) Maven home: /usr/share/maven Java version: 1.8.0_242, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-8-openjdk-1.8.0.242.b08-0.el8_1.x86_64 Default locale: en_US, platform encoding: UTF-8 OS name: \"linux\", version: \"5.4.12-200.el8_1.x86_64\", arch: \"amd64\", family: \"unix\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/configuring_red_hat_build_of_openjdk_8_for_rhel/selecting-installed-openjdk8-version-for-specific-application |
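The chapter mentions a wrapper script as an alternative to exporting JAVA_HOME interactively, but does not include one. The script below is a hypothetical example, not part of the product documentation: the application path /opt/myapp/myapp.jar is a placeholder, and the JDK path assumes the java-8-openjdk symbolic link created by the yum installation.

#!/bin/sh
# Hypothetical wrapper that pins a single application to Red Hat build of OpenJDK 8
JAVA_HOME=/usr/lib/jvm/java-8-openjdk
export JAVA_HOME
exec "$JAVA_HOME/bin/java" -jar /opt/myapp/myapp.jar "$@"

Launching the application through such a wrapper leaves the system-wide default Java, managed by the alternatives command, untouched.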
3.2. Creating Guests with virt-install | 3.2. Creating Guests with virt-install You can use the virt-install command to create virtual machines and install operating systems on those virtual machines from the command line. virt-install can be used either interactively or as part of a script to automate the creation of virtual machines. If you are using an interactive graphical installation, you must have virt-viewer installed before you run virt-install . In addition, you can start an unattended installation of virtual machine operating systems using virt-install with kickstart files. Note You might need root privileges for some virt-install commands to complete successfully. The virt-install utility uses a number of command-line options. However, most virt-install options are not required. The main required options for virtual guest machine installations are: --name The name of the virtual machine. --memory The amount of memory (RAM) to allocate to the guest, in MiB. Guest storage Use one of the following guest storage options: --disk The storage configuration details for the virtual machine. If you use the --disk none option, the virtual machine is created with no disk space. --filesystem The path to the file system for the virtual machine guest. Installation method Use one of the following installation methods: --location The location of the installation media. --cdrom The file or device used as a virtual CD-ROM device. It can be a path to an ISO image, or a URL from which to fetch or access a minimal boot ISO image. However, it cannot be a physical host CD-ROM or DVD-ROM device. --pxe Uses the PXE boot protocol to load the initial ramdisk and kernel for starting the guest installation process. --import Skips the OS installation process and builds a guest around an existing disk image. The device used for booting is the first device specified by the disk or filesystem option. --boot The post-install VM boot configuration. This option allows specifying a boot device order, permanently booting off kernel and initrd with optional kernel arguments and enabling a BIOS boot menu. To see a complete list of options, enter the following command: To see a complete list of attributes for an option, enter the following command: The virt-install man page also documents each command option, important variables, and examples. Prior to running virt-install , you may also need to use qemu-img to configure storage options. For instructions on using qemu-img , see Chapter 14, Using qemu-img . 3.2.1. Installing a virtual machine from an ISO image The following example installs a virtual machine from an ISO image: The --cdrom /path/to/rhel7.iso option specifies that the virtual machine will be installed from the CD or DVD image at the specified location. 3.2.2. Importing a virtual machine image The following example imports a virtual machine from a virtual disk image: The --import option specifies that the virtual machine will be imported from the virtual disk image specified by the --disk /path/to/imported/disk.qcow option. 3.2.3. Installing a virtual machine from the network The following example installs a virtual machine from a network location: The --location http://example.com/path/to/os option specifies that the installation tree is at the specified network location. 3.2.4. Installing a virtual machine using PXE When installing a virtual machine using the PXE boot protocol, both the --network option specifying a bridged network and the --pxe option must be specified.
The following example installs a virtual machine using PXE: 3.2.5. Installing a virtual machine with Kickstart The following example installs a virtual machine using a kickstart file: The --initrd-inject and --extra-args options specify that the virtual machine will be installed using a Kickstart file. 3.2.6. Configuring the guest virtual machine network during guest creation When creating a guest virtual machine, you can specify and configure the network for the virtual machine. This section provides the options for each of the main guest virtual machine network types. Default network with NAT The default network uses libvirtd 's network address translation (NAT) virtual network switch. For more information about NAT, see Section 6.1, "Network Address Translation (NAT) with libvirt" . Before creating a guest virtual machine with the default network with NAT, ensure that the libvirt-daemon-config-network package is installed. To configure a NAT network for the guest virtual machine, use the following option for virt-install : Note If no network option is specified, the guest virtual machine is configured with a default network with NAT. Bridged network with DHCP When configured for bridged networking, the guest uses an external DHCP server. This option should be used if the host has a static networking configuration and the guest requires full inbound and outbound connectivity with the local area network (LAN). It should be used if live migration will be performed with the guest virtual machine. To configure a bridged network with DHCP for the guest virtual machine, use the following option: Note The bridge must be created separately, prior to running virt-install . For details on creating a network bridge, see Section 6.4.1, "Configuring Bridged Networking on a Red Hat Enterprise Linux 7 Host" . Bridged network with a static IP address Bridged networking can also be used to configure the guest to use a static IP address. To configure a bridged network with a static IP address for the guest virtual machine, use the following options: For more information on network booting options, see the Red Hat Enterprise Linux 7 Installation Guide . No network To configure a guest virtual machine with no network interface, use the following option:
"virt-install --help",
"virt install -- option =?",
"virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 --disk size=8 --cdrom /path/to/rhel7.iso --os-variant rhel7",
"virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 --disk /path/to/imported/disk.qcow --import --os-variant rhel7",
"virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 --disk size=8 --location http://example.com/path/to/os --os-variant rhel7",
"virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 --disk size=8 --network=bridge:br0 --pxe --os-variant rhel7",
"virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 --disk size=8 --location http://example.com/path/to/os --os-variant rhel7 --initrd-inject /path/to/ks.cfg --extra-args=\"ks=file:/ks.cfg console=tty0 console=ttyS0,115200n8\"",
"--network default",
"--network br0",
"--network br0 --extra-args \"ip= 192.168.1.2::192.168.1.1:255.255.255.0:test.example.com:eth0:none \"",
"--network=none"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_installation_overview-creating_guests_with_virt_install |
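The section notes that qemu-img may be needed to prepare storage before running virt-install but defers the details to Chapter 14. As a rough sketch only, with the image path and size chosen for illustration, a qcow2 disk can be pre-created and then referenced explicitly instead of letting virt-install allocate it with --disk size=8 :

# Pre-create an 8 GiB qcow2 image for the guest
qemu-img create -f qcow2 /var/lib/libvirt/images/guest1-rhel7.qcow2 8G

# Install into the pre-created image instead of having virt-install allocate storage
virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/guest1-rhel7.qcow2,format=qcow2 \
  --cdrom /path/to/rhel7.iso --os-variant rhel7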