Chapter 6. Managing remote systems in the web console
Chapter 6. Managing remote systems in the web console You can connect to remote systems and manage them in the RHEL 8 web console. You learn: The optimal topology of connected systems. How to add and remove remote systems. When, why, and how to use SSH keys for remote system authentication. How to configure a web console client to allow a user authenticated with a smart card to SSH to a remote host and access services on it. Prerequisites The SSH service is running on remote systems. 6.1. Remote system manager in the web console For security reasons, use the following network setup of remote systems managed by the RHEL 8 web console: Configure one system with the web console as a bastion host. The bastion host is a system with an open HTTPS port. All other systems communicate through SSH. With the web interface running on the bastion host, you can reach all other systems through the SSH protocol using port 22 in the default configuration. 6.2. Adding remote hosts to the web console In the RHEL web console, you can manage remote systems after you add them with the corresponding credentials. Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console. In the RHEL 8 web console, click your <username>@<hostname> in the top left corner of the Overview page. From the drop-down menu, click Add new host. In the Add new host dialog box, specify the host you want to add. Optional: Add the user name for the account to which you want to connect. You can use any user account of the remote system. However, if you use the credentials of a user account without administration privileges, you cannot perform administration tasks. If you use the same credentials as on your local system, the web console authenticates remote systems automatically every time you log in. Note that using the same credentials on multiple systems weakens security. Optional: Click the Color field to change the color of the system. Click Add. Important The web console does not save passwords used to log in to remote systems, which means that you must log in again after each system restart. The next time you log in, click Log in on the main screen of the disconnected remote system to open the login dialog. Verification The new host is listed in the <username>@<hostname> drop-down menu. 6.3. Enabling SSH login for a new host When you add a new host to the web console, you can also log in to the host with an SSH key. If you already have an SSH key on your system, the web console uses the existing one; otherwise, the web console can create a key. Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console. In the RHEL 8 web console, click your <username>@<hostname> in the top left corner of the Overview page. From the drop-down menu, click Add new host. In the Add new host dialog box, specify the host you want to add. Add the user name for the account to which you want to connect. You can use any user account of the remote system. However, if you use a user account without administration privileges, you cannot perform administration tasks. Optional: Click the Color field to change the color of the system. Click Add. A new dialog window appears asking for a password. Enter the user account password.
Check Authorize SSH key if you already have an SSH key. Check Create a new SSH key and authorize it if you do not have an SSH key. The web console creates the key. Add a password for the SSH key. Confirm the password. Click Log in . Verification Log out. Log back in. Click Log in in the Not connected to host screen. Select SSH key as your authentication option. Enter your key password. Click Log in . Additional resources Using secure communications between two systems with OpenSSH 6.4. Configuring a web console to allow a user authenticated with a smart card to SSH to a remote host without being asked to authenticate again After you have logged in to a user account on the RHEL web console, as an Identity Management (IdM) system administrator you might need to connect to remote machines by using the SSH protocol. You can use the constrained delegation feature to use SSH without being asked to authenticate again. Follow this procedure to configure the web console to use constrained delegation. In the example below, the web console session runs on the myhost.idm.example.com host and it is being configured to access the remote.idm.example.com host by using SSH on behalf of the authenticated user. Prerequisites You have obtained an IdM admin ticket-granting ticket (TGT). You have root access to remote.idm.example.com . The web console service is present in IdM. The remote.idm.example.com host is present in IdM. The web console has created an S4U2Proxy Kerberos ticket in the user session. To verify that this is the case, log in to the web console as an IdM user, open the Terminal page, and enter: Procedure Create a list of the target hosts that can be accessed by the delegation rule: Create a service delegation target: Add the target host to the delegation target: Allow cockpit sessions to access the target host list by creating a service delegation rule and adding the HTTP service Kerberos principal to it: Create a service delegation rule: Add the web console client to the delegation rule: Add the delegation target to the delegation rule: Enable Kerberos authentication on the remote.idm.example.com host: SSH to remote.idm.example.com as root . Open the /etc/ssh/sshd_config file for editing. Enable GSSAPIAuthentication by uncommenting the GSSAPIAuthentication no line and replacing it with GSSAPIAuthentication yes . Restart the SSH service on remote.idm.example.com so that the above changes take effect immediately: Additional resources Logging in to the web console with smart cards Constrained delegation in Identity Management 6.5. Using Ansible to configure a web console to allow a user authenticated with a smart card to SSH to a remote host without being asked to authenticate again After you have logged in to a user account on the RHEL web console, as an Identity Management (IdM) system administrator you might need to connect to remote machines by using the SSH protocol. You can use the constrained delegation feature to use SSH without being asked to authenticate again. Follow this procedure to use the servicedelegationrule and servicedelegationtarget ansible-freeipa modules to configure a web console to use constrained delegation. In the example below, the web console session runs on the myhost.idm.example.com host and it is being configured to access the remote.idm.example.com host by using SSH on behalf of the authenticated user. Prerequisites The IdM admin password. root access to remote.idm.example.com . The web console service is present in IdM. 
The remote.idm.example.com host is present in IdM. The web console has created an S4U2Proxy Kerberos ticket in the user session. To verify that this is the case, log in to the web console as an IdM user, open the Terminal page, and enter: You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Create a web-console-smart-card-ssh.yml playbook with the following content: Create a task that ensures the presence of a delegation target: Add a task that adds the target host to the delegation target: Add a task that ensures the presence of a delegation rule: Add a task that ensures that the Kerberos principal of the web console client service is a member of the constrained delegation rule: Add a task that ensures that the constrained delegation rule is associated with the web-console-delegation-target delegation target: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Enable Kerberos authentication on remote.idm.example.com : SSH to remote.idm.example.com as root . Open the /etc/ssh/sshd_config file for editing. Enable GSSAPIAuthentication by uncommenting the GSSAPIAuthentication no line and replacing it with GSSAPIAuthentication yes . Additional resources Logging in to the web console with smart cards Constrained delegation in Identity Management README-servicedelegationrule.md and README-servicedelegationtarget.md in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/servicedelegationtarget and /usr/share/doc/ansible-freeipa/playbooks/servicedelegationrule directories
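The sshd_config step that both the manual and the Ansible procedures describe can be condensed into a minimal shell sketch. It assumes root access on remote.idm.example.com and that the stock GSSAPIAuthentication no line (possibly commented out) is still present in /etc/ssh/sshd_config:
sed -i 's/^#\?GSSAPIAuthentication no/GSSAPIAuthentication yes/' /etc/ssh/sshd_config   # enable Kerberos (GSSAPI) authentication
sshd -t                                                                                 # check the edited configuration for syntax errors
systemctl try-restart sshd.service                                                      # apply the change without interrupting existing sessions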
[ "klist Ticket cache: FILE:/run/user/1894000001/cockpit-session-3692.ccache Default principal: [email protected] Valid starting Expires Service principal 07/30/21 09:19:06 07/31/21 09:19:06 HTTP/[email protected] 07/30/21 09:19:06 07/31/21 09:19:06 krbtgt/[email protected] for client HTTP/[email protected]", "ipa servicedelegationtarget-add cockpit-target", "ipa servicedelegationtarget-add-member cockpit-target --principals=host/[email protected]", "ipa servicedelegationrule-add cockpit-delegation", "ipa servicedelegationrule-add-member cockpit-delegation --principals=HTTP/[email protected]", "ipa servicedelegationrule-add-target cockpit-delegation --servicedelegationtargets=cockpit-target", "systemctl try-restart sshd.service", "klist Ticket cache: FILE:/run/user/1894000001/cockpit-session-3692.ccache Default principal: [email protected] Valid starting Expires Service principal 07/30/21 09:19:06 07/31/21 09:19:06 HTTP/[email protected] 07/30/21 09:19:06 07/31/21 09:19:06 krbtgt/[email protected] for client HTTP/[email protected]", "cd ~/ MyPlaybooks /", "--- - name: Playbook to create a constrained delegation target hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure servicedelegationtarget web-console-delegation-target is present ipaservicedelegationtarget: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web-console-delegation-target", "- name: Ensure servicedelegationtarget web-console-delegation-target member principal host/[email protected] is present ipaservicedelegationtarget: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web-console-delegation-target principal: host/[email protected] action: member", "- name: Ensure servicedelegationrule delegation-rule is present ipaservicedelegationrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web-console-delegation-rule", "- name: Ensure the Kerberos principal of the web console client service is added to the servicedelegationrule web-console-delegation-rule ipaservicedelegationrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web-console-delegation-rule principal: HTTP/myhost.idm.example.com action: member", "- name: Ensure a constrained delegation rule is associated with a specific delegation target ipaservicedelegationrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web-console-delegation-rule target: web-console-delegation-target action: member", "ansible-playbook --vault-password-file=password_file -v -i inventory web-console-smart-card-ssh.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_systems_using_the_rhel_8_web_console/managing-remote-systems-in-the-web-console_system-management-using-the-RHEL-8-web-console
1.3. A Look at Managing Certificates (Non-TMS)
1.3. A Look at Managing Certificates (Non-TMS) A conventional PKI environment provides the basic framework to manage certificates stored in software databases. This is a non-TMS environment, since it does not manage certificates on smart cards. At a minimum, a non-TMS environment requires only a CA, but it can also use OCSP responders and KRA instances. For more information, see the following sections in the Red Hat Certificate System Planning, Installation, and Deployment Guide: Managing Certificates Using a Single Certificate Manager Planning for Lost Keys: Key Archival and Recovery Balancing Certificate Request Processing Balancing Client OCSP Requests
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/overview-managing_certificates
Chapter 2. Executive reports
Chapter 2. Executive reports You can download a high-level executive report summarizing the security exposure of your infrastructure. Executive reports are two- to three-page PDF files, designed for an executive audience, and include the following information: On page 1 Number of RHEL systems analyzed Number of individual CVEs to which your systems are currently exposed Number of security rules in your infrastructure List of CVEs that have advisories On page 2 Percentage of CVEs by severity (CVSS base score) range Number of CVEs published within the last 7, 30, and 90 days Top three CVEs in your infrastructure, including security rules and known exploits On page 3 Security rule breakdown by severity Top three security rules, including severity and number of exposed systems 2.1. Downloading an executive report Use the following steps to download an executive report for key stakeholders in your security organization: Procedure Navigate to the Security > Vulnerability > Reports tab and log in if necessary. On the Executive report card, click Download PDF. Click Save File and click OK. Verification Verify that the PDF file is in your Downloads folder or other specified location. 2.2. Downloading an executive report using the vulnerability service API You can download an executive report using the vulnerability service API. Request URL: https://console.openshiftusgov.com/api/vulnerability/v1/report/executive Curl: curl -X GET "https://console.openshiftusgov.com/api/vulnerability/v1/report/executive" -H "accept: application/vnd.api+json"
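A hedged variation of the curl request above that saves the report to a local file; the Authorization header is a placeholder, because the token or session mechanism depends on your environment:
curl -X GET "https://console.openshiftusgov.com/api/vulnerability/v1/report/executive" -H "accept: application/vnd.api+json" -H "Authorization: Bearer <your_token>" -o executive-report.pdf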
[ "curl -X GET \"https://console.openshiftusgov.com/api/vulnerability/v1/report/executive\" -H \"accept: application/vnd.api+json\"" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_vulnerability_service_reports_with_fedramp/con-vuln-report-exec-report
Chapter 3. Getting started
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites To build the example, Maven must be configured to use the Red Hat repository or a local repository. You must install the examples. You must have a message broker listening for connections on localhost. It must have anonymous access enabled. For more information, see Starting the broker. You must have a queue named exampleQueue. For more information, see Creating a queue. 3.2. Running your first example The example creates a consumer and producer for a queue named exampleQueue. It sends a text message and then receives it back, printing the received message to the console. Procedure Use Maven to build the examples by running the following command in the <install-dir>/examples/features/standard/queue directory. $ mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests The addition of dependency:copy-dependencies results in the dependencies being copied into the target/dependency directory. Use the java command to run the example. On Linux or UNIX: $ java -cp "target/classes:target/dependency/*" org.apache.activemq.artemis.jms.example.QueueExample On Windows: > java -cp "target\classes;target\dependency\*" org.apache.activemq.artemis.jms.example.QueueExample For example, running it on Linux results in the following output: $ java -cp "target/classes:target/dependency/*" org.apache.activemq.artemis.jms.example.QueueExample Sent message: This is a text message Received message: This is a text message The source code for the example is in the <install-dir>/examples/features/standard/queue/src directory. Additional examples are available in the <install-dir>/examples/features/standard directory.
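If the broker prerequisites from section 3.1 are not yet in place, the following sketch shows one possible way to satisfy them with the AMQ Broker CLI; the instance name mybroker and the exact options are assumptions, so follow Starting the broker and Creating a queue for the supported steps:
<install-dir>/bin/artemis create mybroker --allow-anonymous    # create a broker instance that accepts anonymous connections (prompts for a user and password)
mybroker/bin/artemis run &                                     # start the broker listening on localhost
mybroker/bin/artemis queue create --name exampleQueue --auto-create-address --anycast   # create the queue used by the example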
[ "mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests", "java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample", "> java -cp \"target\\classes;target\\dependency\\*\" org.apache.activemq.artemis.jms.example.QueueExample", "java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample Sent message: This is a text message Received message: This is a text message" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_core_protocol_jms_client/getting_started
8.2. Monitoring and Diagnosing Performance Problems
8.2. Monitoring and Diagnosing Performance Problems Red Hat Enterprise Linux 7 provides a number of tools that are useful for monitoring system performance and diagnosing performance problems related to I/O and file systems and their configuration. This section outlines the available tools and gives examples of how to use them to monitor and diagnose I/O and file system related performance issues. 8.2.1. Monitoring System Performance with vmstat Vmstat reports on processes, memory, paging, block I/O, interrupts, and CPU activity across the entire system. It can help administrators determine whether the I/O subsystem is responsible for any performance issues. The information most relevant to I/O performance is in the following columns: si Swap in, or reads from swap space, in KB. so Swap out, or writes to swap space, in KB. bi Block in, or block read operations, in KB. bo Block out, or block write operations, in KB. wa The portion of CPU time spent waiting for I/O operations to complete. Swap in and swap out are particularly useful when your swap space and your data are on the same device, and as indicators of memory usage. Additionally, the free, buff, and cache columns can help identify write-back frequency. A sudden drop in cache values and an increase in free values indicate that write-back and page cache invalidation have begun. If analysis with vmstat shows that the I/O subsystem is responsible for reduced performance, administrators can use iostat to determine the responsible I/O device. vmstat is provided by the procps-ng package. For detailed information about using vmstat, see the man page: 8.2.2. Monitoring I/O Performance with iostat Iostat is provided by the sysstat package. It reports on I/O device load in your system. If analysis with vmstat shows that the I/O subsystem is responsible for reduced performance, you can use iostat to determine the I/O device responsible. You can focus the output of iostat reports on a specific device by using the parameters defined in the iostat man page: 8.2.2.1. Detailed I/O Analysis with blktrace Blktrace provides detailed information about how time is spent in the I/O subsystem. The companion utility blkparse reads the raw output from blktrace and produces a human-readable summary of input and output operations recorded by blktrace. For more detailed information about this tool, see the blktrace (8) and blkparse (1) man pages: 8.2.2.2. Analyzing blktrace Output with btt The btt utility is provided as part of the blktrace package. It analyzes blktrace output and displays the amount of time that data spends in each area of the I/O stack, making it easier to spot bottlenecks in the I/O subsystem. Some of the important events tracked by the blktrace mechanism and analyzed by btt are: Queuing of the I/O event ( Q ) Dispatch of the I/O to the driver event ( D ) Completion of I/O event ( C ) You can include or exclude factors involved with I/O performance issues by examining combinations of events. To inspect the timing of sub-portions of each I/O device, look at the timing between captured blktrace events for the I/O device. For example, the following command reports the total amount of time spent in the lower part of the kernel I/O stack ( Q2C ), which includes scheduler, driver, and hardware layers, as an average under await time: If the device takes a long time to service a request ( D2C ), the device may be overloaded, or the workload sent to the device may be sub-optimal.
If block I/O is queued for a long time before being dispatched to the storage device ( Q2G ), it may indicate that the storage in use is unable to serve the I/O load. For example, a LUN queue full condition has been reached and is preventing the I/O from being dispatched to the storage device. Looking at the timing across adjacent I/O can provide insight into some types of bottleneck situations. For example, if btt shows that the time between requests being sent to the block layer ( Q2Q ) is larger than the total time that requests spent in the block layer ( Q2C ), this indicates that there is idle time between I/O requests and the I/O subsystem may not be responsible for performance issues. Comparing Q2C values across adjacent I/O can show the amount of variability in storage service time. The values can be either: fairly consistent with a small range, or highly variable in the distribution range, which indicates a possible storage device side congestion issue. For more detailed information about this tool, see the btt (1) man page: 8.2.2.3. Analyzing blktrace Output with iowatcher The iowatcher tool can use blktrace output to graph I/O over time. It focuses on the Logical Block Address (LBA) of disk I/O, throughput in megabytes per second, the number of seeks per second, and I/O operations per second. This can help to identify when you are hitting the operations-per-second limit of a device. For more detailed information about this tool, see the iowatcher (1) man page. 8.2.3. Storage Monitoring with SystemTap The Red Hat Enterprise Linux 7 SystemTap Beginners Guide includes several sample scripts that are useful for profiling and monitoring storage performance. The following SystemTap example scripts relate to storage performance and may be useful in diagnosing storage or file system performance problems. By default they are installed to the /usr/share/doc/systemtap-client/examples/io directory. disktop.stp Checks the status of reading/writing disk every 5 seconds and outputs the top ten entries during that period. iotime.stp Prints the amount of time spent on read and write operations, and the number of bytes read and written. traceio.stp Prints the top ten executables based on cumulative I/O traffic observed, every second. traceio2.stp Prints the executable name and process identifier as reads and writes to the specified device occur. inodewatch.stp Prints the executable name and process identifier each time a read or write occurs to the specified inode on the specified major/minor device. inodewatch2.stp Prints the executable name, process identifier, and attributes each time the attributes are changed on the specified inode on the specified major/minor device.
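A minimal sketch of the monitoring flow described in this section, assuming the device under investigation is /dev/sda and that the procps-ng, sysstat, and blktrace packages are installed:
vmstat 1 5                           # check si/so/bi/bo/wa for signs of I/O pressure
iostat -x sda 1 5                    # per-device latency (await, r_await, w_await) and utilization
blktrace -d /dev/sda -o trace -w 30  # capture 30 seconds of block-layer events
blkparse -i trace -d trace.bin       # human-readable event listing plus a binary dump for btt
btt -i trace.bin                     # per-phase timings such as Q2G, D2C, and Q2C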
[ "man vmstat", "man iostat", "man blktrace", "man blkparse", "iostat -x [...] Device: await r_await w_await vda 16.75 0.97 162.05 dm-0 30.18 1.13 223.45 dm-1 0.14 0.14 0.00 [...]", "man btt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Storage_and_File_Systems-Monitoring_and_diagnosing_performance_problems
4.263. raptor
4.263. raptor 4.263.1. RHSA-2012:0410 - Important: raptor security update Updated raptor packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. Raptor provides parsers for Resource Description Framework (RDF) files. Security Fix CVE-2012-0037 An XML External Entity expansion flaw was found in the way Raptor processed RDF files. If an application linked against Raptor were to open a specially-crafted RDF file, it could possibly allow a remote attacker to obtain a copy of an arbitrary local file that the user running the application had access to. A bug in the way Raptor handled external entities could cause that application to crash or, possibly, execute arbitrary code with the privileges of the user running the application. Red Hat would like to thank Timothy D. Morgan of VSR for reporting this issue. All Raptor users are advised to upgrade to these updated packages, which contain a backported patch to correct this issue. All running applications linked against Raptor must be restarted for this update to take effect.
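A minimal sketch of applying the update on Red Hat Enterprise Linux 6; the package name comes from the advisory, and which applications need restarting depends on your system:
yum update raptor    # install the fixed raptor packages, then restart applications linked against Raptor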
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/raptor
Chapter 6. Updating Drivers During Installation on AMD64 and Intel 64 Systems
Chapter 6. Updating Drivers During Installation on AMD64 and Intel 64 Systems In most cases, Red Hat Enterprise Linux already includes drivers for the devices that make up your system. However, if your system contains hardware that has been released very recently, drivers for this hardware might not yet be included. Sometimes, a driver update that provides support for a new device might be available from Red Hat or your hardware vendor on a driver disc that contains RPM packages. Typically, the driver disc is available for download as an ISO image file. Important Driver updates should only be performed if a missing driver prevents you from completing the installation successfully. The drivers included in the kernel should always be preferred over drivers provided by other means. Often, you do not need the new hardware during the installation process. For example, if you use a DVD to install to a local hard drive, the installation will succeed even if drivers for your network card are not available. In such a situation, complete the installation and add support for the new hardware afterward - see Red Hat Enterprise Linux 7 System Administrator's Guide for details of adding this support. In other situations, you might want to add drivers for a device during the installation process to support a particular configuration. For example, you might want to install drivers for a network device or a storage adapter card to give the installation program access to the storage devices that your system uses. You can use a driver disc to add this support during installation in one of two ways: place the ISO image file of the driver disc in a location accessible to the installation program, on a local hard drive, on a USB flash drive, or on a CD or DVD. create a driver disc by extracting the image file onto a CD or a DVD, or a USB flash drive. See the instructions for making installation discs in Section 3.1, "Making an Installation CD or DVD" for more information on burning ISO image files to a CD or DVD, and Section 3.2, "Making Installation USB Media" for instructions on writing ISO images to USB drives. If Red Hat, your hardware vendor, or a trusted third party told you that you will require a driver update during the installation process, choose a method to supply the update from the methods described in this chapter and test it before beginning the installation. Conversely, do not perform a driver update during installation unless you are certain that your system requires it. The presence of a driver on a system for which it was not intended can complicate support. Warning Driver update disks sometimes disable conflicting kernel drivers, where necessary. In rare cases, unloading a kernel module in this way can cause installation errors. 6.1. Limitations of Driver Updates During Installation On UEFI systems with the Secure Boot technology enabled, all drivers being loaded must be signed with a valid certificate, otherwise the system will refuse them. All drivers provided by Red Hat are signed by one of Red Hat's private keys and authenticated by the corresponding Red Hat public key in the kernel. If you load any other drivers (ones not provided on the Red Hat Enterprise Linux installation DVD), you must make sure that they are signed as well. More information about signing custom drivers can be found in the Working with Kernel Modules chapter in the Red Hat Enterprise Linux 7 System Administrator's Guide.
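As a hedged illustration of preparing driver update media, where the ISO path and the USB device node are assumptions that must be verified first (dd overwrites the target device):
lsblk                                                       # identify the USB flash drive, for example /dev/sdb
dd if=/path/to/driver-update.iso of=/dev/sdb bs=4M && sync  # write the driver disc image to the drive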
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-driver-updates-x86
9.3. libvirt NUMA Tuning
9.3. libvirt NUMA Tuning Generally, optimal performance on NUMA systems is achieved by limiting guest size to the amount of resources on a single NUMA node. Avoid unnecessarily splitting resources across NUMA nodes. Use the numastat tool to view per-NUMA-node memory statistics for processes and the operating system. In the following example, the numastat tool shows four virtual machines with suboptimal memory alignment across NUMA nodes: You can run numad to align the guests' CPUs and memory resources automatically. However, it is highly recommended to configure guest resource alignment using libvirt instead. To verify that the memory has been aligned, run numastat -c qemu-kvm again. The following output shows successful resource alignment: Note Running numastat with -c provides compact output; adding the -m option adds system-wide memory information on a per-node basis to the output. Refer to the numastat man page for more information. For optimal performance results, memory pinning should be used in combination with pinning of vCPU threads as well as other hypervisor threads. 9.3.1. NUMA vCPU Pinning vCPU pinning provides similar advantages to task pinning on bare metal systems. Since vCPUs run as user-space tasks on the host operating system, pinning increases cache efficiency. One example of this is an environment where all vCPU threads are running on the same physical socket, therefore sharing an L3 cache domain. Combining vCPU pinning with numatune can avoid NUMA misses. The performance impacts of NUMA misses are significant, generally starting at a 10% performance hit or higher. vCPU pinning and numatune should be configured together. If the virtual machine is performing storage or network I/O tasks, it can be beneficial to pin all vCPUs and memory to the same physical socket that is physically connected to the I/O adapter. Note The lstopo tool can be used to visualize NUMA topology. It can also help verify that vCPUs are binding to cores on the same physical socket. Refer to the following Knowledgebase article for more information on lstopo: https://access.redhat.com/site/solutions/62879 Important Pinning causes increased complexity when there are many more vCPUs than physical cores. The following example XML configuration has a domain process pinned to physical CPUs 0-7. The vCPU thread is pinned to its own cpuset. For example, vCPU0 is pinned to physical CPU 0, vCPU1 is pinned to physical CPU 1, and so on: <vcpu cpuset='0-7'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='1'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='3'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='5' cpuset='5'/> <vcpupin vcpu='6' cpuset='6'/> <vcpupin vcpu='7' cpuset='7'/> </cputune> There is a direct relationship between the vcpu and vcpupin tags. If a vcpupin option is not specified, the value will be automatically determined and inherited from the parent vcpu tag option. The following configuration shows <vcpupin> for vCPU 5 missing. Hence, vCPU5 would be pinned to physical CPUs 0-7, as specified in the parent tag <vcpu>: <vcpu cpuset='0-7'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='1'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='3'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='6' cpuset='6'/> <vcpupin vcpu='7' cpuset='7'/> </cputune>
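A runtime sketch of the same pinning with virsh; the guest name rhel7-guest is an assumption, and the CPU and node numbers must match your host topology (check with lstopo or numactl --hardware):
virsh vcpupin rhel7-guest 0 0 --live --config                          # pin vCPU 0 to physical CPU 0
virsh vcpupin rhel7-guest 1 1 --live --config                          # pin vCPU 1 to physical CPU 1
virsh numatune rhel7-guest --nodeset 0 --mode strict --live --config   # keep guest memory on NUMA node 0
numastat -c qemu-kvm                                                   # confirm the resulting memory alignment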
[ "numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51722 (qemu-kvm) 68 16 357 6936 2 3 147 598 8128 51747 (qemu-kvm) 245 11 5 18 5172 2532 1 92 8076 53736 (qemu-kvm) 62 432 1661 506 4851 136 22 445 8116 53773 (qemu-kvm) 1393 3 1 2 12 0 0 6702 8114 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 1769 463 2024 7462 10037 2672 169 7837 32434", "numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51747 (qemu-kvm) 0 0 7 0 8072 0 1 0 8080 53736 (qemu-kvm) 0 0 7 0 0 0 8113 0 8120 53773 (qemu-kvm) 0 0 7 0 0 0 1 8110 8118 59065 (qemu-kvm) 0 0 8050 0 0 0 0 0 8051 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 0 0 8072 0 8072 0 8114 8110 32368", "<vcpu cpuset='0-7'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='1'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='3'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='5' cpuset='5'/> <vcpupin vcpu='6' cpuset='6'/> <vcpupin vcpu='7' cpuset='7'/> </cputune>", "<vcpu cpuset='0-7'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='1'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='3'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='6' cpuset='6'/> <vcpupin vcpu='7' cpuset='7'/> </cputune>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-numa-numa_and_libvirt
Chapter 16. Deleting applications
Chapter 16. Deleting applications You can delete applications created in your project. 16.1. Deleting applications using the Developer perspective You can delete an application and all of its associated components using the Topology view in the Developer perspective: Click the application you want to delete to see the side panel with the resource details of the application. Click the Actions drop-down menu displayed on the upper right of the panel, and select Delete Application to see a confirmation dialog box. Enter the name of the application and click Delete to delete it. You can also right-click the application you want to delete and click Delete Application to delete it.
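If you prefer the CLI, a rough, unofficial equivalent is to delete the resources that carry the application grouping label; my-app and my-project are placeholders, and resources not matched by the label or by the all alias are left untouched:
oc delete all -l app.kubernetes.io/part-of=my-app -n my-project   # delete the labeled workloads, services, and builds of the application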
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/building_applications/odc-deleting-applications
File System Guide
File System Guide Red Hat Ceph Storage 8 Configuring and Mounting Ceph File Systems Red Hat Ceph Storage Documentation Team
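The command list that follows covers the whole guide; as a minimal end-to-end sketch, with host names, the file system name, and the client ID taken from the examples in that list:
cephadm shell                                                                    # open a shell on a cluster node
ceph fs volume create cephfs --placement="2 host01 host02"                       # create the file system and its MDS daemons
ceph fs authorize cephfs client.1 / rw                                           # authorize a client with read/write access
mkdir -p /mnt/cephfs
mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs    # mount it with the kernel client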
[ "cephadm shell", "ceph fs volume create FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph fs volume create test --placement=\"2 host01 host02\"", "ceph osd pool create DATA_POOL [ PG_NUM ] ceph osd pool create METADATA_POOL [ PG_NUM ]", "ceph osd pool create cephfs_data 64 ceph osd pool create cephfs_metadata 64", "ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL", "ceph fs new test cephfs_metadata cephfs_data", "ceph orch apply mds FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mds test --placement=\"2 host01 host02\"", "ceph orch ls", "ceph fs ls ceph fs status", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "touch mds.yaml", "service_type: mds service_id: FILESYSTEM_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3", "service_type: mds service_id: fs_name placement: hosts: - host01 - host02", "cephadm shell --mount mds.yaml:/var/lib/ceph/mds/mds.yaml", "cd /var/lib/ceph/mds/", "cephadm shell", "cd /var/lib/ceph/mds/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i mds.yaml", "ceph fs new CEPHFS_NAME METADATA_POOL DATA_POOL", "ceph fs new test metadata_pool data_pool", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "cephadm shell", "ceph config set mon mon_allow_pool_delete true", "ceph fs volume rm FILESYSTEM_NAME --yes-i-really-mean-it", "ceph fs volume rm cephfs-new --yes-i-really-mean-it", "ceph orch ls", "ceph orch rm SERVICE_NAME", "ceph orch rm mds.test", "ceph orch ps", "ceph orch ps", "ceph fs dump dumped fsmap epoch 399 Filesystem 'cephfs01' (27) e399 max_mds 1 in 0 up {0=20384} failed damaged stopped [mds.a{0:20384} state up:active seq 239 addr [v2:127.0.0.1:6854/966242805,v1:127.0.0.1:6855/966242805]] Standby daemons: [mds.b{-1:10420} state up:standby seq 2 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]]", "ceph config set STANDBY_DAEMON mds_join_fs FILE_SYSTEM_NAME", "ceph config set mds.b mds_join_fs cephfs01", "ceph fs dump dumped fsmap epoch 405 e405 Filesystem 'cephfs01' (27) max_mds 1 in 0 up {0=10420} failed damaged stopped [mds.b{0:10420} state up:active seq 274 join_fscid=27 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]] 1 Standby daemons: [mds.a{-1:10720} state up:standby seq 2 addr [v2:127.0.0.1:6854/1340357658,v1:127.0.0.1:6855/1340357658]]", "ceph fs set NAME max_mds NUMBER", "ceph fs set cephfs max_mds 2", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients ====== +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | STANDBY MDS | +-------------+ | node3 | +-------------+", "ceph fs set FS_NAME standby_count_wanted NUMBER", "ceph fs set cephfs standby_count_wanted 2", "ceph fs set FS_NAME allow_standby_replay 1", "ceph fs set cephfs allow_standby_replay 1", "setfattr -n ceph.dir.pin.distributed -v 1 
DIRECTORY_PATH", "setfattr -n ceph.dir.pin.distributed -v 1 dir1/", "setfattr -n ceph.dir.pin.random -v PERCENTAGE_IN_DECIMAL DIRECTORY_PATH", "setfattr -n ceph.dir.pin.random -v 0.01 dir1/", "getfattr -n ceph.dir.pin.random DIRECTORY_PATH getfattr -n ceph.dir.pin.distributed DIRECTORY_PATH", "getfattr -n ceph.dir.pin.distributed dir1/ file: dir1/ ceph.dir.pin.distributed=\"1\" getfattr -n ceph.dir.pin.random dir1/ file: dir1/ ceph.dir.pin.random=\"0.01\"", "ceph tell mds.a get subtrees | jq '.[] | [.dir.path, .auth_first, .export_pin]'", "setfattr -n ceph.dir.pin.distributed -v 0 DIRECTORY_PATH", "setfattr -n ceph.dir.pin.distributed -v 0 dir1/", "getfattr -n ceph.dir.pin.distributed DIRECTORY_PATH", "getfattr -n ceph.dir.pin.distributed dir1/", "setfattr -n ceph.dir.pin -v -1 DIRECTORY_PATH", "setfattr -n ceph.dir.pin -v -1 dir1/", "mkdir -p a/b 1 setfattr -n ceph.dir.pin -v 1 a/ 2 setfattr -n ceph.dir.pin -v 0 a/b 3", "setfattr -n ceph.dir.pin -v RANK PATH_TO_DIRECTORY", "setfattr -n ceph.dir.pin -v 2 cephfs/home", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | +-------------+", "ceph fs set NAME max_mds NUMBER", "ceph fs set cephfs max_mds 1", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------|--------+ +-----------------+----------+-------+-------+ | POOl | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | | node2 | +-------------+", "ceph orch ps | grep mds", "ceph tell MDS_SERVICE_NAME counter dump", "ceph tell mds.cephfs.ceph2-hk-n-0mfqao-node4.isztbk counter dump [ { \"key\": \"mds_client_metrics\", \"value\": [ { \"labels\": { \"fs_name\": \"cephfs\", \"id\": \"24379\" }, \"counters\": { \"num_clients\": 4 } } ] }, { \"key\": \"mds_client_metrics-cephfs\", \"value\": [ { \"labels\": { \"client\": \"client.24413\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 56, \"cap_miss\": 9, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 2, \"dentry_lease_miss\": 12, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 4, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 0, \"total_write_size\": 0 } }, { \"labels\": { \"client\": \"client.24502\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 921403, \"cap_miss\": 
102382, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 17117, \"dentry_lease_miss\": 204710, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 7, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 1, \"total_write_size\": 132 } }, { \"labels\": { \"client\": \"client.24508\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 928694, \"cap_miss\": 103183, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 17217, \"dentry_lease_miss\": 206348, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 7, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 1, \"total_write_size\": 132 } }, { \"labels\": { \"client\": \"client.24520\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 56, \"cap_miss\": 9, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 2, \"dentry_lease_miss\": 12, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 4, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 0, \"total_write_size\": 0 } } ] } ]", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rwp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a client.1 key: AQAz7EVWygILFRAAdIcuJ11opU/JKyfFmxhuaw== caps: [mds] allow rw caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8 caps: [mon] allow r network 10.0.0.0/8 caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ KEYRING_FILE /etc/ceph/", "scp [email protected]:/etc/ceph/ceph.client.1.keyring /etc/ceph/", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "scp [email protected]:/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "ceph fs volume create FILE_SYSTEM_NAME", "ceph fs volume create cephfs01", "ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME DIRECTORY PERMISSIONS", "ceph fs authorize cephfs01 client.1 / rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 exported keyring for client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph fs authorize cephfs01 client.1 / rw root_squash /volumes rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01 root_squash, allow rw fsname=cephfs01 path=/volumes\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph auth get CLIENT_NAME > OUTPUT_FILE_NAME scp OUTPUT_FILE_NAME TARGET_NODE_NAME :/etc/ceph", "ceph auth get client.1 > ceph.client.1.keyring exported keyring for client.1 scp ceph.client.1.keyring client:/etc/ceph root@client's password: 
ceph.client.1.keyring 100% 178 333.0KB/s 00:00", "mkdir PATH_TO_NEW_DIRECTORY_NAME", "mkdir /mnt/mycephfs", "ceph-fuse PATH_TO_NEW_DIRECTORY_NAME -n CEPH_USER_NAME --client-fs=_FILE_SYSTEM_NAME", "ceph-fuse /mnt/mycephfs/ -n client.1 --client-fs=cephfs01 ceph-fuse[555001]: starting ceph client 2022-05-09T07:33:27.158+0000 7f11feb81200 -1 init, newargv = 0x55fc4269d5d0 newargc=15 ceph-fuse[555001]: starting fuse", "ceph osd pool create DATA_POOL_NAME erasure", "ceph osd pool create cephfs-data-ec01 erasure pool 'cephfs-data-ec01' created", "ceph osd lspools", "ceph osd pool set DATA_POOL_NAME allow_ec_overwrites true", "ceph osd pool set cephfs-data-ec01 allow_ec_overwrites true set pool 15 allow_ec_overwrites to true", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "ceph fs add_data_pool FILE_SYSTEM_NAME DATA_POOL_NAME", "ceph fs add_data_pool cephfs-ec cephfs-data-ec01", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T cephfs-data-ec01 data 0 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "mkdir PATH_TO_DIRECTORY setfattr -n ceph.dir.layout.pool -v DATA_POOL_NAME PATH_TO_DIRECTORY", "mkdir /mnt/cephfs/newdir setfattr -n ceph.dir.layout.pool -v cephfs-data-ec01 /mnt/cephfs/newdir", "cephadm shell", "ceph fs authorize FILE_SYSTEM_NAME client. CLIENT_NAME / DIRECTORY CAPABILITY [/ DIRECTORY CAPABILITY ] PERMISSIONS", "ceph fs authorize cephfs_a client.1 / r /temp rw client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==", "ceph fs authorize cephfs_a client.1 /temp rw", "ceph auth get client. ID", "ceph auth get client.1 client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps mds = \"allow r, allow rw path=/temp\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"", "ceph auth get client. ID -o ceph.client. ID .keyring", "ceph auth get client.1 -o ceph.client.1.keyring exported keyring for client.1", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "chmod 644 ceph.client. ID .keyring", "chmod 644 /etc/ceph/ceph.client.1.keyring", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-common", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. 
ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "mount -t ceph MONITOR-1_NAME :6789, MONITOR-2_NAME :6789, MONITOR-3_NAME :6789:/ MOUNT_POINT -o name= CLIENT_ID ,fs= FILE_SYSTEM_NAME", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o nowsync,name=1,fs=cephfs01", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "#DEVICE PATH TYPE OPTIONS MON_0_HOST : PORT , MOUNT_POINT ceph name= CLIENT_ID , MON_1_HOST : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , fs= FILE_SYSTEM_NAME , MON_2_HOST : PORT :/q[_VOL_]/ SUB_VOL / UID_SUB_VOL , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/cephfs ceph name=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ fs=cephfs01, _netdev,noatime", "subscription-manager repos --enable=6-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "ceph-fuse -n client. CLIENT_ID --client_fs FILE_SYSTEM_NAME MOUNT_POINT", "ceph-fuse -n client.1 --client_fs cephfs01 /mnt/mycephfs", "ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs", "ceph-fuse -n client. 
CLIENT_ID MOUNT_POINT -r PATH", "ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs", "ceph-fuse -n client.1 /mnt/cephfs --client_reconnect_stale=true", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "#DEVICE PATH TYPE OPTIONS DUMP FSCK HOST_NAME : PORT , MOUNT_POINT fuse.ceph ceph.id= CLIENT_ID , 0 0 HOST_NAME : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , HOST_NAME : PORT :/ ceph.client_fs= FILE_SYSTEM_NAME ,ceph.name= USERNAME ,ceph.keyring=/etc/ceph/ KEYRING_FILE , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/mycephfs fuse.ceph ceph.id=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ ceph.client_fs=cephfs01,ceph.name=client.1,ceph.keyring=/etc/ceph/client1.keyring, _netdev,defaults", "ceph fs volume create VOLUME_NAME", "ceph fs volume create cephfs", "ceph fs volume ls", "ceph fs volume info VOLUME_NAME", "ceph fs volume info cephfs { \"mon_addrs\": [ \"192.168.1.7:40977\", ], \"pending_subvolume_deletions\": 0, \"pools\": { \"data\": [ { \"avail\": 106288709632, \"name\": \"cephfs.cephfs.data\", \"used\": 4096 } ], \"metadata\": [ { \"avail\": 106288709632, \"name\": \"cephfs.cephfs.meta\", \"used\": 155648 } ] }, \"used_size\": 0 }", "ceph config set mon mon_allow_pool_delete true", "ceph fs volume rm VOLUME_NAME [--yes-i-really-mean-it]", "ceph fs volume rm cephfs --yes-i-really-mean-it", "ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolumegroup create cephfs subgroup0", "ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--size SIZE_IN_BYTES ] [--pool_layout DATA_POOL_NAME ] [--uid UID ] [--gid GID ] [--mode OCTAL_MODE ]", "ceph fs subvolumegroup create cephfs subvolgroup_2 10737418240", "ceph fs subvolumegroup resize VOLUME_NAME GROUP_NAME new_size [--no_shrink]", "ceph fs subvolumegroup resize cephfs subvolgroup_2 20737418240 [ { \"bytes_used\": 10768679044 }, { \"bytes_quota\": 20737418240 }, { \"bytes_pcent\": \"51.93\" } ]", "ceph fs subvolumegroup info VOLUME_NAME GROUP_NAME", "ceph fs subvolumegroup info cephfs subvolgroup_2 { \"atime\": \"2022-10-05 18:00:39\", \"bytes_pcent\": \"51.85\", \"bytes_quota\": 20768679043, \"bytes_used\": 10768679044, \"created_at\": \"2022-10-05 18:00:39\", \"ctime\": \"2022-10-05 18:21:26\", \"data_pool\": \"cephfs.cephfs.data\", \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"60.221.178.236:1221\", \"205.64.75.112:1221\", \"20.209.241.242:1221\" ], \"mtime\": \"2022-10-05 18:01:25\", \"uid\": 0 }", "ceph fs subvolumegroup ls VOLUME_NAME", "ceph fs subvolumegroup ls cephfs", "ceph fs subvolumegroup getpath VOLUME_NAME GROUP_NAME", "ceph fs subvolumegroup getpath cephfs subgroup0", "ceph fs subvolumegroup snapshot ls VOLUME_NAME GROUP_NAME", "ceph fs subvolumegroup snapshot ls cephfs subgroup0", "ceph fs subvolumegroup snapshot rm VOLUME_NAME GROUP_NAME SNAP_NAME [--force]", "ceph fs subvolumegroup snapshot rm cephfs subgroup0 snap0 --force", "ceph fs subvolumegroup rm VOLUME_NAME GROUP_NAME [--force]", "ceph fs subvolumegroup rm cephfs subgroup0 --force", "ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ] [--namespace-isolated]", "ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated", "ceph fs subvolume ls VOLUME_NAME [--group_name 
SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume ls cephfs --group_name subgroup0", "ceph fs subvolume resize VOLUME_NAME SUBVOLUME_NAME NEW_SIZE [--group_name SUBVOLUME_GROUP_NAME ] [--no_shrink]", "ceph fs subvolume resize cephfs sub0 1024000000 --group_name subgroup0 --no_shrink", "ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name _SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume getpath cephfs sub0 --group_name subgroup0", "ceph fs subvolume info VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume info cephfs sub0 --group_name subgroup0", "ceph fs subvolume info cephfs sub0 { \"atime\": \"2023-07-14 08:52:46\", \"bytes_pcent\": \"0.00\", \"bytes_quota\": 1024000000, \"bytes_used\": 0, \"created_at\": \"2023-07-14 08:52:46\", \"ctime\": \"2023-07-14 08:53:54\", \"data_pool\": \"cephfs.cephfs.data\", \"features\": [ \"snapshot-clone\", \"snapshot-autoprotect\", \"snapshot-retention\" ], \"flavor\": \"2\", \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"10.0.208.172:6789\", \"10.0.211.197:6789\", \"10.0.209.212:6789\" ], \"mtime\": \"2023-07-14 08:52:46\", \"path\": \"/volumes/_nogroup/sub0/834c5cbc-f5db-4481-80a3-aca92ff0e7f3\", \"pool_namespace\": \"\", \"state\": \"complete\", \"type\": \"subvolume\", \"uid\": 0 }", "ceph auth get CLIENT_NAME", "ceph auth get client.0 [client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" 1 caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\" 2", "ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME ]", "ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0", "CLIENT_NAME key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = allow rw, allow rws path= DIRECTORY_PATH caps mon = allow r caps osd = allow rw tag cephfs data= DIRECTORY_NAME", "[client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"", "ceph fs volume create VOLUME_NAME", "ceph fs volume create cephfs", "ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolumegroup create cephfs subgroup0", "ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolume create cephfs sub0 --group_name subgroup0", "ceph fs subvolume snapshot create VOLUME_NAME _SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --group_name SUBVOLUME_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --target_group_name SUBVOLUME_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --target_group_name subgroup1", "ceph fs clone status VOLUME_NAME CLONE_NAME [--group_name TARGET_GROUP_NAME ]", "ceph fs clone status cephfs clone0 --group_name subgroup1 { \"status\": { \"state\": \"complete\" } }", "ceph fs subvolume snapshot ls VOLUME_NAME 
SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0", "ceph fs subvolume snapshot info VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot info cephfs sub0 snap0 --group_name subgroup0", "{ \"created_at\": \"2022-05-09 06:18:47.330682\", \"data_pool\": \"cephfs_data\", \"has_pending_clones\": \"no\", \"size\": 0 }", "ceph fs subvolume rm VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ] [--force] [--retain-snapshots]", "ceph fs subvolume rm cephfs sub0 --group_name subgroup0 --retain-snapshots", "ceph fs subvolume snapshot clone VOLUME_NAME DELETED_SUBVOLUME RETAINED_SNAPSHOT NEW_SUBVOLUME --group_name SUBVOLUME_GROUP_NAME --target_group_name SUBVOLUME_TARGET_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 sub1 --group_name subgroup0 --target_group_name subgroup0", "ceph fs subvolume snapshot rm VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME --force]", "ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0 --force", "ceph fs subvolume metadata set VOLUME_NAME SUBVOLUME_NAME KEY_NAME VALUE [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata set cephfs sub0 test_meta cluster --group_name subgroup0", "ceph fs subvolume metadata set cephfs sub0 \"test meta\" cluster --group_name subgroup0", "ceph fs subvolume metadata set cephfs sub0 \"test_meta\" cluster2 --group_name subgroup0", "ceph fs subvolume metadata get VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata get cephfs sub0 test_meta --group_name subgroup0 cluster", "ceph fs subvolume metadata ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata ls cephfs sub0 { \"test_meta\": \"cluster\" }", "ceph fs subvolume metadata rm VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata rm cephfs sub0 test_meta --group_name subgroup0", "ceph fs subvolume metadata ls cephfs sub0 {}", "subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms", "dnf install cephfs-top", "ceph mgr module enable stats", "ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' > /etc/ceph/ceph.client.fstop.keyring", "cephfs-top cephfs-top - Wed Nov 30 15:26:05 2022 All Filesystem Info Total Client(s): 4 - 3 FUSE, 1 kclient, 0 libcephfs COMMANDS: m - select a filesystem | s - sort menu | l - limit number of clients | r - reset to default | q - quit client_id mount_root chit(%) dlease(%) ofiles oicaps oinodes rtio(MB) raio(MB) rsp(MB/s) wtio(MB) waio(MB) wsp(MB/s) rlatavg(ms) rlatsd(ms) wlatavg(ms) wlatsd(ms) mlatavg(ms) mlatsd(ms) mount_point@host/addr Filesystem: cephfs1 - 2 client(s) 4500 / 100.0 100.0 0 751 0 0.0 0.0 0.0 578.13 0.03 0.0 N/A N/A N/A N/A N/A N/A N/A@example/192.168.1.4 4501 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.41 0.0 /mnt/cephfs2@example/192.168.1.4 Filesystem: cephfs2 - 2 client(s) 4512 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.4 0.0 /mnt/cephfs3@example/192.168.1.4 4518 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.52 0.0 /mnt/cephfs4@example/192.168.1.4", "m Filesystems Press \"q\" to go back to home (all filesystem info) screen cephfs01 cephfs02 q cephfs-top - Thu Oct 20 07:29:35 2022 Total Client(s): 3 - 2 FUSE, 1 kclient, 0 libcephfs", "cephfs-top --selftest selftest ok", "ceph mgr module enable 
mds_autoscaler", "umount MOUNT_POINT", "umount /mnt/cephfs", "fusermount -u MOUNT_POINT", "fusermount -u /mnt/cephfs", "ceph fs authorize FILE_SYSTEM_NAME client. CLIENT_NAME / DIRECTORY CAPABILITY [/ DIRECTORY CAPABILITY ]", "[user@client ~]USD ceph fs authorize cephfs_a client.1 /temp rwp client.1 key: AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps: [mds] allow r, allow rwp path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "setfattr -n ceph.dir.pin -v RANK DIRECTORY", "[user@client ~]USD setfattr -n ceph.dir.pin -v 2 /temp", "setfattr -n ceph.dir.pin -v -1 DIRECTORY", "[user@client ~]USD setfattr -n ceph.dir.pin -v -1 /home/ceph-user", "ceph osd pool create POOL_NAME", "ceph osd pool create cephfs_data_ssd pool 'cephfs_data_ssd' created", "ceph fs add_data_pool FS_NAME POOL_NAME", "ceph fs add_data_pool cephfs cephfs_data_ssd added data pool 6 to fsmap", "ceph fs ls name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data_ssd]", "ceph fs rm_data_pool FS_NAME POOL_NAME", "ceph fs rm_data_pool cephfs cephfs_data_ssd removed data pool 6 from fsmap", "ceph fs ls name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs.cephfs.data]", "ceph fs set FS_NAME down true", "ceph fs set cephfs down true", "ceph fs set FS_NAME down false", "ceph fs set cephfs down false", "ceph fs fail FS_NAME", "ceph fs fail cephfs", "ceph fs set FS_NAME joinable true", "ceph fs set cephfs joinable true cephfs marked joinable; MDS may join as newly active.", "ceph fs set FS_NAME down true", "ceph fs set cephfs down true cephfs marked down.", "ceph fs status", "ceph fs status cephfs - 0 clients ====== +-------------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+------------+-------+-------+ |cephfs.cephfs.meta | metadata | 31.5M | 52.6G| |cephfs.cephfs.data | data | 0 | 52.6G| +-----------------+----------+-------+---------+ STANDBY MDS cephfs.ceph-host01 cephfs.ceph-host02 cephfs.ceph-host03", "ceph fs rm FS_NAME --yes-i-really-mean-it", "ceph fs rm cephfs --yes-i-really-mean-it", "ceph fs ls", "ceph mds fail MDS_NAME", "ceph mds fail example01", "fs required_client_features FILE_SYSTEM_NAME add FEATURE_NAME fs required_client_features FILE_SYSTEM_NAME rm FEATURE_NAME", "ceph tell DAEMON_NAME client ls", "ceph tell mds.0 client ls [ { \"id\": 4305, \"num_leases\": 0, \"num_caps\": 3, \"state\": \"open\", \"replay_requests\": 0, \"completed_requests\": 0, \"reconnecting\": false, \"inst\": \"client.4305 172.21.9.34:0/422650892\", \"client_metadata\": { \"ceph_sha1\": \"79f0367338897c8c6d9805eb8c9ad24af0dcd9c7\", \"ceph_version\": \"ceph version 16.2.8-65.el8cp (79f0367338897c8c6d9805eb8c9ad24af0dcd9c7)\", \"entity_id\": \"0\", \"hostname\": \"senta04\", \"mount_point\": \"/tmp/tmpcMpF1b/mnt.0\", \"pid\": \"29377\", \"root\": \"/\" } } ]", "ceph tell DAEMON_NAME client evict id= ID_NUMBER", "ceph tell mds.0 client evict id=4305", "ceph osd blocklist ls listed 1 entries 127.0.0.1:0/3710147553 2022-05-09 11:32:24.716146", "ceph osd blocklist rm CLIENT_NAME_OR_IP_ADDR", "ceph osd blocklist rm 127.0.0.1:0/3710147553 un-blocklisting 127.0.0.1:0/3710147553", "recover_session=clean", "client_reconnect_stale=true", "getfattr -n ceph.quota.max_bytes DIRECTORY", "getfattr -n ceph.quota.max_bytes /mnt/cephfs/ getfattr: Removing leading '/' from absolute path names file: mnt/cephfs/ ceph.quota.max_bytes=\"100000000\"", "getfattr -n ceph.quota.max_files DIRECTORY", "getfattr -n ceph.quota.max_files /mnt/cephfs/ getfattr: Removing 
leading '/' from absolute path names file: mnt/cephfs/ ceph.quota.max_files=\"10000\"", "setfattr -n ceph.quota.max_bytes -v LIMIT_VALUE DIRECTORY", "setfattr -n ceph.quota.max_bytes -v 2T /cephfs/", "setfattr -n ceph.quota.max_files -v LIMIT_VALUE DIRECTORY", "setfattr -n ceph.quota.max_files -v 10000 /cephfs/", "setfattr -n ceph.quota.max_bytes -v 0 DIRECTORY", "setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/", "setfattr -n ceph.quota.max_files -v 0 DIRECTORY", "setfattr -n ceph.quota.max_files -v 0 /mnt/cephfs/", "setfattr -n ceph. TYPE .layout. FIELD -v VALUE PATH", "setfattr -n ceph.file.layout.stripe_unit -v 1048576 test", "getfattr -n ceph. TYPE .layout PATH", "getfattr -n ceph.dir.layout /home/test ceph.dir.layout=\"stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data\"", "getfattr -n ceph. TYPE .layout. FIELD _PATH", "getfattr -n ceph.file.layout.pool test ceph.file.layout.pool=\"cephfs_data\"", "setfattr -x ceph.dir.layout DIRECTORY_PATH", "[user@client ~]USD setfattr -x ceph.dir.layout /home/cephfs", "setfattr -x ceph.dir.layout.pool_namespace DIRECTORY_PATH", "[user@client ~]USD setfattr -x ceph.dir.layout.pool_namespace /home/cephfs", "cephadm shell", "ceph fs set FILE_SYSTEM_NAME allow_new_snaps true", "ceph fs set cephfs01 allow_new_snaps true", "mkdir NEW_DIRECTORY_PATH", "mkdir /.snap/new-snaps", "rmdir NEW_DIRECTORY_PATH", "rmdir /.snap/new-snaps", "cephadm shell", "ceph mgr module enable snap_schedule", "cephadm shell", "ceph fs snap-schedule add FILE_SYSTEM_VOLUME_PATH REPEAT_INTERVAL [ START_TIME ] --fs CEPH_FILE_SYSTEM_NAME", "ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00 --fs mycephfs", "ceph fs snap-schedule retention add FILE_SYSTEM_VOLUME_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention add /cephfs h 14 1 ceph fs snap-schedule retention add /cephfs d 4 2 ceph fs snap-schedule retention add /cephfs 14h4w 3", "ceph fs snap-schedule list FILE_SYSTEM_VOLUME_PATH [--format=plain|json] [--recursive=true]", "ceph fs snap-schedule list /cephfs --recursive=true", "ceph fs snap-schedule status FILE_SYSTEM_VOLUME_PATH [--format=plain|json]", "ceph fs snap-schedule status /cephfs --format=json", "ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME SUBVOLUME_GROUP_NAME", "ceph fs subvolume getpath cephfs subvol_1 subvolgroup_1", "ceph fs snap-schedule add SUBVOLUME_DIR_PATH SNAP_SCHEDULE [ START_TIME ] --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00 --fs cephfs --subvol subvol_1 Schedule set for path /..", "ceph fs snap-schedule add /.. SNAP_SCHEDULE [ START_TIME] --fs CEPH_FILE_SYSTEM_NAME --subvol _SUBVOLUME_NAME", "ceph fs snap-schedule add - 2M --subvol sv_non_def_1", "ceph fs snap-schedule add /.. SNAP_SCHEDULE [ START_TIME] --fs CEPH_FILE_SYSTEM_NAME --subvol _SUBVOLUME_NAME --group NON_DEFAULT_SUBVOLGROUP_NAME", "ceph fs snap-schedule add - 2M --fs cephfs --subvol sv_non_def_1 --group svg1", "ceph fs snap-schedule retention add SUBVOLUME_DIR_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 14 1 ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. d 4 2 ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 
14h4w 3 Retention added to path /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..", "ceph fs snap-schedule retention add / [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD_COUNT --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule retention add / 5h --fs cephfs --subvol sv_sched Retention added to path /volumes/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule retention add / [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD_COUNT --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON_DEFAULT_SUBVOLGROUP_NAME", "ceph fs snap-schedule retention add / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention added to path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a54j0dda7f16/..", "ceph fs snap-schedule list SUBVOLUME_VOLUME_PATH [--format=plain|json] [--recursive=true]", "ceph fs snap-schedule list / --recursive=true /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h", "ceph fs snap-schedule status SUBVOLUME_DIR_PATH [--format=plain|json]", "ceph fs snap-schedule status /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. --format=json {\"fs\": \"cephfs\", \"subvol\": \"subvol_1\", \"path\": \"/volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..\", \"rel_path\": \"/..\", \"schedule\": \"4h\", \"retention\": {\"h\": 14}, \"start\": \"2022-05-16T14:00:00\", \"created\": \"2023-03-20T08:47:18\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}", "ceph fs snap-schedule status --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule status --fs cephfs --subvol sv_sched {\"fs\": \"cephfs\", \"subvol\": \"sv_sched\", \"group\": \"subvolgroup_cg\", \"path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"rel_path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"schedule\": \"1h\", \"retention\": {\"h\": 5}, \"start\": \"2024-05-21T00:00:00\", \"created\": \"2024-05-21T09:18:58\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}", "ceph fs snap-schedule status --fs _CEPH_FILE_SYSTEM_NAME_ --subvol _SUBVOLUME_NAME_ --group _NON-DEFAULT_SUBVOLGROUP_NAME_", "ceph fs snap-schedule status --fs cephfs --subvol sv_sched --group subvolgroup_cg {\"fs\": \"cephfs\", \"subvol\": \"sv_sched\", \"group\": \"subvolgroup_cg\", \"path\": \"/volumes/subvolgroup_cg/sv_sched/e564329a-kj87-4763-gh0y-b56c8sev7t23/..\", \"rel_path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"schedule\": \"1h\", \"retention\": {\"h\": 5}, \"start\": \"2024-05-21T00:00:00\", \"created\": \"2024-05-21T09:18:58\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}", "ceph fs snap-schedule activate FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule activate /cephfs", "ceph fs snap-schedule activate SUBVOL_DIR_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule activate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..", "ceph fs snap-schedule activate /.. REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule activate / --fs cephfs --subvol sv_sched Schedule activated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule activate /.. 
[ REPEAT_INTERVAL ] --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule activate / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule activated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule deactivate FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule deactivate /cephfs 1d", "ceph fs snap-schedule deactivate SUBVOL_DIR_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule deactivate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 1d", "ceph fs snap-schedule deactivate / REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule deactivate / --fs cephfs --subvol sv_sched Schedule deactivated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule deactivate / REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule deactivate / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule deactivated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ] [ START_TIME ]", "ceph fs snap-schedule remove /cephfs 4h 2022-05-16T14:00:00", "ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH", "ceph fs snap-schedule remove /cephfs", "ceph fs snap-schedule remove SUBVOL_DIR_PATH [ REPEAT_INTERVAL ] [ START_TIME ]", "ceph fs snap-schedule remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h 2022-05-16T14:00:00", "ceph fs snap-schedule remove / --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule remove / --fs cephfs --subvol sv_sched Schedule removed for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule remove / --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule remove / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule removed for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule retention remove FILE_SYSTEM_VOLUME_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention remove /cephfs h 4 1 ceph fs snap-schedule retention remove /cephfs 14d4w 2", "ceph fs snap-schedule retention remove SUBVOL_DIR_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 4 1 ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 
14d4w 2", "ceph fs snap-schedule retention remove / TIME_PERIOD_PAIR TIME_PERIOD COUNT --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule retention remove / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention removed from path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule retention remove / TIME_PERIOD_PAIR TIME_PERIOD COUNT --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule retention remove / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention removed from path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "cephadm shell", "ceph orch apply cephfs-mirror [\" NODE_NAME \"]", "ceph orch apply cephfs-mirror \"node1.example.com\" Scheduled cephfs-mirror update", "ceph orch apply cephfs-mirror --placement=\" PLACEMENT_SPECIFICATION \"", "ceph orch apply cephfs-mirror --placement=\"3 host1 host2 host3\" Scheduled cephfs-mirror update", "Error EINVAL: name component must include only a-z, 0-9, and -", "ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME / rwps", "ceph fs authorize cephfs client.mirror_remote / rwps [client.mirror_remote] key = AQCjZ5Jg739AAxAAxduIKoTZbiFJ0lgose8luQ==", "ceph mgr module enable mirroring", "ceph fs snapshot mirror enable FILE_SYSTEM_NAME", "ceph fs snapshot mirror enable cephfs", "ceph fs snapshot mirror disable FILE_SYSTEM_NAME", "ceph fs snapshot mirror disable cephfs", "ceph mgr module enable mirroring", "ceph fs snapshot mirror peer_bootstrap create FILE_SYSTEM_NAME CLIENT_NAME SITE_NAME", "ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote remote-site {\"token\": \"eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==\"}", "ceph fs snapshot mirror peer_bootstrap import FILE_SYSTEM_NAME TOKEN", "ceph fs snapshot mirror peer_bootstrap import cephfs eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==", "ceph fs snapshot mirror peer_list FILE_SYSTEM_NAME", "ceph fs snapshot mirror peer_list cephfs {\"e5ecb883-097d-492d-b026-a585d1d7da79\": {\"client_name\": \"client.mirror_remote\", \"site_name\": \"remote-site\", \"fs_name\": \"cephfs\", \"mon_host\": \"[v2:10.0.211.54:3300/0,v1:10.0.211.54:6789/0] [v2:10.0.210.56:3300/0,v1:10.0.210.56:6789/0] [v2:10.0.210.65:3300/0,v1:10.0.210.65:6789/0]\"}}", "ceph fs snapshot mirror peer_remove FILE_SYSTEM_NAME PEER_UUID", "ceph fs snapshot mirror peer_remove cephfs e5ecb883-097d-492d-b026-a585d1d7da79", "ceph fs snapshot mirror add FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror add cephfs /volumes/_nogroup/subvol_1", "ceph fs snapshot mirror remove FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror remove cephfs /home/user1", "cephadm shell", "ceph fs snapshot mirror daemon status", "ceph fs snapshot mirror daemon status [ { \"daemon_id\": 15594, \"filesystems\": [ { \"filesystem_id\": 1, \"name\": \"cephfs\", \"directory_count\": 1, \"peers\": [ { \"uuid\": 
\"e5ecb883-097d-492d-b026-a585d1d7da79\", \"remote\": { \"client_name\": \"client.mirror_remote\", \"cluster_name\": \"remote-site\", \"fs_name\": \"cephfs\" }, \"stats\": { \"failure_count\": 1, \"recovery_count\": 0 } } ] } ] } ]", "ceph --admin-daemon PATH_TO_THE_ASOK_FILE help", "ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok help { \"fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e\": \"get peer mirror status\", \"fs mirror status cephfs@11\": \"get filesystem mirror status\", }", "ceph --admin-daemon PATH_TO_THE_ASOK_FILE fs mirror status FILE_SYSTEM_NAME @_FILE_SYSTEM_ID", "ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok fs mirror status cephfs@11 { \"rados_inst\": \"192.168.0.5:0/1476644347\", \"peers\": { \"1011435c-9e30-4db6-b720-5bf482006e0e\": { 1 \"remote\": { \"client_name\": \"client.mirror_remote\", \"cluster_name\": \"remote-site\", \"fs_name\": \"cephfs\" } } }, \"snap_dirs\": { \"dir_count\": 1 } }", "ceph --admin-daemon PATH_TO_ADMIN_SOCKET fs mirror status FILE_SYSTEM_NAME @ FILE_SYSTEM_ID PEER_UUID", "ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e { \"/home/user1\": { \"state\": \"idle\", 1 \"last_synced_snap\": { \"id\": 120, \"name\": \"snap1\", \"sync_duration\": 0.079997898999999997, \"sync_time_stamp\": \"274900.558797s\" }, \"snaps_synced\": 2, 2 \"snaps_deleted\": 0, 3 \"snaps_renamed\": 0 } }", "ceph fs snapshot mirror dirmap FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1 { \"instance_id\": \"25184\", 1 \"last_shuffled\": 1661162007.012663, \"state\": \"mapped\" }", "ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1 { \"reason\": \"no mirror daemons running\", \"state\": \"stalled\" 1 }", "ceph --admin-daemon ASOK_FILE_NAME counter dump", "ceph --admin-daemon ceph-client.cephfs-mirror.ceph1-hk-n-0mfqao-node7.pnbrlu.2.93909288073464.asok counter dump [ { \"key\": \"cephfs_mirror\", \"value\": [ { \"labels\": {}, \"counters\": { \"mirrored_filesystems\": 1, \"mirror_enable_failures\": 0 } } ] }, { \"key\": \"cephfs_mirror_mirrored_filesystems\", \"value\": [ { \"labels\": { \"filesystem\": \"cephfs\" }, \"counters\": { \"mirroring_peers\": 1, \"directory_count\": 1 } } ] }, { \"key\": \"cephfs_mirror_peers\", \"value\": [ { \"labels\": { \"peer_cluster_filesystem\": \"cephfs\", \"peer_cluster_name\": \"remote_site\", \"source_filesystem\": \"cephfs\", \"source_fscid\": \"1\" }, \"counters\": { \"snaps_synced\": 1, \"snaps_deleted\": 0, \"snaps_renamed\": 0, \"sync_failures\": 0, \"avg_sync_time\": { \"avgcount\": 1, \"sum\": 4.216959457, \"avgtime\": 4.216959457 }, \"sync_bytes\": 132 } } ] } ]" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/file_system_guide/management-of-ceph-file-system-volumes-subvolume-groups-and-subvolumes
Chapter 9. Logging
Chapter 9. Logging Logging is important in troubleshooting and debugging. By default, logging is turned off. To enable logging, you must set a logging level and provide a delegate function to receive the log messages. 9.1. Setting the log output level The library emits log traces at different levels: Error Warning Information Verbose The lowest log level, Error , traces only error events and produces the fewest log messages. A higher log level includes all the log levels below it and generates a larger volume of log messages. 9.2. Enabling protocol logging The Frame log level is handled differently. Setting the trace level to Frame enables trace output for AMQP protocol headers and frames. To get normal trace output and AMQP frame tracing at the same time, combine the levels with a logical OR, for example TraceLevel.Frame | TraceLevel.Warning . The following code writes AMQP frames to the console. Example: Logging delegate Trace.TraceLevel = TraceLevel.Frame; Trace.TraceListener = (f, a) => Console.WriteLine( DateTime.Now.ToString("[hh:mm:ss.fff]") + " " + string.Format(f, a));
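A minimal sketch that combines the settings shown in this chapter: it raises the trace level to AMQP frames plus warnings (which also covers errors, because each level includes the levels below it) and registers a console listener that prefixes every message with a timestamp. The using Amqp directive and the LoggingSetup wrapper are assumptions based on the Trace and TraceLevel types used here; adjust them to the namespace and structure of your client application.

// Sketch: AMQP frame tracing combined with warning and error logs,
// written to the console with a timestamp.
using System;
using Amqp; // assumed namespace for the Trace and TraceLevel types

static class LoggingSetup
{
    public static void Configure()
    {
        // Logical OR combines frame tracing with the Warning level.
        Trace.TraceLevel = TraceLevel.Frame | TraceLevel.Warning;

        // Delegate that receives and formats each log message.
        Trace.TraceListener = (f, a) => Console.WriteLine(
            DateTime.Now.ToString("[hh:mm:ss.fff]") + " " + string.Format(f, a));
    }
}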
[ "// Enable Error logs only. Trace.TraceLevel = TraceLevel.Error", "// Enable Verbose logs. This includes logs at all log levels. Trace.TraceLevel = TraceLevel.Verbose", "// Enable just AMQP frame tracing Trace.TraceLevel = TraceLevel.Frame;", "// Enable AMQP Frame logs, and Warning and Error logs Trace.TraceLevel = TraceLevel.Frame | TraceLevel.Warning;", "Trace.TraceLevel = TraceLevel.Frame; Trace.TraceListener = (f, a) => Console.WriteLine( DateTime.Now.ToString(\"[hh:mm:ss.fff]\") + \" \" + string.Format(f, a));" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_proton_dotnet/1.0/html/using_qpid_proton_dotnet/logging
Chapter 9. Installation configuration parameters for IBM Power Virtual Server
Chapter 9. Installation configuration parameters for IBM Power Virtual Server Before you deploy an OpenShift Container Platform on IBM Power(R) Virtual Server, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 9.1. Available installation configuration parameters for IBM Power Virtual Server The following tables specify the required, optional, and IBM Power Virtual Server-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 9.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } The UserID is the login for the user's IBM Cloud(R) account. String. For example, existing_user_id . The PowerVSResourceGroup is the resource group in which IBM Power(R) Virtual Server resources are created. If using an existing VPC, the existing VPC and subnets should be in this resource group. String. For example, existing_resource_group . Specifies the IBM Cloud(R) colo region where the cluster will be created. String. For example, existing_region . Specifies the IBM Cloud(R) colo region where the cluster will be created. String. For example, existing_zone . 9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 9.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. 
The Red Hat OpenShift Networking network plugin to install. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 192.168.0.0/24 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 9.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). 
String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled The SMTLevel specifies the level of SMT to set to the control plane and compute machines. Valid values are 1, 2, 3, 4, 5, 6, 7, 8, off , and on . String Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. Example usage, compute.platform.powervs.sysType . aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. Example usage, controlPlane.platform.powervs.processors . aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. 
Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Specifies the IBM Cloud(R) region in which to create VPC resources. String. For example, existing_vpc_region . Specifies existing subnets (by name) where cluster resources will be created. String. For example, powervs_region_example_subnet . Specifies the IBM Cloud(R) name. String. For example, existing_vpcName . Specifies the ID of the Power IAAS instance created from the IBM Cloud(R) Catalog. String. For example, existing_service_instance_GUID . Specifies a pre-created IBM Power(R) Virtual Server boot image that overrides the default image for cluster nodes. String. For example, existing_cluster_os_image . Specifies the default configuration used when installing on IBM Power(R) Virtual Server for machine pools that do not define their own platform configuration. String. For example, existing_machine_platform . Specifies the size of a virtual machine's memory, in GB. The valid integer must be an integer number of GB that is at least 2 and no more than 64, depending on the machine type. Defines the processor sharing model for the instance. The valid values are Capped, Dedicated, and Shared. Defines the processing units for the instance. The number of processors must be from .5 to 32 cores. The processors must be in increments of .25. Defines the system type for the instance. The system type must be e980 , s922 , e1080 , or s1022 . The available system types depend on the zone you want to target.
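For orientation, the following is a minimal, illustrative sketch of how the parameters described in the tables above fit together in an install-config.yaml file for IBM Power Virtual Server. It is not a complete or validated configuration: the base domain, cluster name, pull secret, SSH key, and the existing_* placeholders are assumptions that you must replace with values from your environment, and the networking values shown are the documented defaults.

apiVersion: v1
baseDomain: example.com                      # cluster DNS is <metadata.name>.<baseDomain>
metadata:
  name: example-cluster                      # lowercase letters, hyphens, and periods only
compute:
- architecture: ppc64le
  hyperthreading: Enabled
  name: worker
  replicas: 3
  platform: {}
controlPlane:
  architecture: ppc64le
  hyperthreading: Enabled
  name: master
  replicas: 3
  platform: {}
networking:
  networkType: OVNKubernetes                 # default network plugin
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.0.0/24                     # default for IBM Power Virtual Server
  serviceNetwork:
  - 172.30.0.0/16
platform:
  powervs:
    userID: existing_user_id
    powervsResourceGroup: existing_resource_group
    region: existing_region
    zone: existing_zone
publish: External
pullSecret: '{"auths": ...}'                 # obtain from Red Hat OpenShift Cluster Manager
sshKey: ssh-ed25519 AAAA...

Machine-pool specific IBM Power Virtual Server settings, such as memoryGiB, processors, procType, and sysType, can be placed under compute.platform.powervs, controlPlane.platform.powervs, or platform.powervs.defaultMachinePlatform, as described in the optional parameters table.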
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "platform: powervs: userID:", "platform: powervs: powervsResourceGroup:", "platform: powervs: region:", "platform: powervs: zone:", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: smtLevel:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "platform: powervs: vpcRegion:", "platform: powervs: vpcSubnets:", "platform: powervs: vpcName:", "platform: powervs: serviceInstanceGUID:", "platform: powervs: clusterOSImage:", "platform: powervs: defaultMachinePlatform:", "platform: powervs: memoryGiB:", "platform: powervs: procType:", "platform: powervs: processors:", "platform: powervs: sysType:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_power_virtual_server/installation-config-parameters-ibm-power-vs
Deploying and managing OpenShift Data Foundation using Red Hat OpenStack Platform
Deploying and managing OpenShift Data Foundation using Red Hat OpenStack Platform Red Hat OpenShift Data Foundation 4.9 Instructions on deploying and managing OpenShift Data Foundation on Red Hat OpenStack Platform Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). Important Deploying and managing OpenShift Data Foundation on Red Hat OpenStack Platform is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/index
Server Developer Guide
Server Developer Guide Red Hat build of Keycloak 22.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_developer_guide/index
Applications
Applications Red Hat Advanced Cluster Management for Kubernetes 2.12 Application management
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/applications/index
Chapter 1. Managing directory entries using the command line
Chapter 1. Managing directory entries using the command line You can add, edit, rename, and delete an LDAP entry using the command line. 1.1. Providing input to the ldapadd, ldapmodify, and ldapdelete utilities When you add, update, or delete entries or attributes in the directory, you can either use the interactive mode of the utilities to enter LDAP Data Interchange Format (LDIF) statements or pass an LDIF file to them. 1.1.1. The interactive mode of OpenLDAP client utilities In the interactive mode, the ldapadd , ldapmodify , and ldapdelete utilities read the input from the command line. To exit the interactive mode, press the Ctrl + D (^D) key combination to send the end-of-file (EOF) escape sequence. In interactive mode, the utility sends the statements to the LDAP server when you press Enter twice or when you send the EOF sequence. Use the interactive mode: To enter LDAP Data Interchange Format (LDIF) statements without creating a file. Example 1.1. Using the ldapmodify interactive mode to enter LDIF statements The following example runs ldapmodify in interactive mode, deletes the telephoneNumber attribute, and adds the manager attribute with the cn=manager_name,ou=people,dc=example,dc=com value to the uid=user,ou=people,dc=example,dc=com entry. Press Ctrl+D after the last statement to exit the interactive mode. # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=people,dc=example,dc=com changetype: modify delete: telephoneNumber - add: manager manager: cn=manager_name,ou=people,dc=example,dc=com modifying entry "uid=user,ou=people,dc=example,dc=com" ^D To redirect LDIF statements, output by another command, to the server: Example 1.2. Using the ldapmodify interactive mode with redirected content The following example redirects the output of the command_that_outputs_LDIF command to ldapmodify . The interactive mode exits automatically after the redirected command exits. # command_that_outputs_LDIF | ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x Additional resources ldif(5) man page 1.1.2. The file mode of OpenLDAP client utilities In the file mode, the ldapadd , ldapmodify , and ldapdelete utilities read the LDAP Data Interchange Format (LDIF) statements from a file. Use this mode to send a larger number of LDIF statements to the server. Example 1.3. Passing a File with LDIF Statements to ldapmodify Create a file with the LDIF statements. For example, create the ~/example.ldif file with the following statements: dn: uid=user,ou=people,dc=example,dc=com changetype: modify delete: telephoneNumber - add: manager manager: cn=manager_name,ou=people,dc=example,dc=com This example deletes the telephoneNumber attribute and adds the manager attribute with the cn=manager_name,ou=people,dc=example,dc=com value to the uid=user,ou=people,dc=example,dc=com entry. Pass the file to the ldapmodify command using the -f parameter: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x -f ~/example.ldif Additional resources ldif(5) man page 1.1.3. The continuous operation mode of OpenLDAP client utilities By default, if you send multiple LDAP Data Interchange Format (LDIF) statements to the server and one operation fails, the process stops. However, entries processed before the error occurred were successfully added, modified, or deleted. 
To ignore errors and continue processing further LDIF statements in a batch, pass the -c parameter to ldapadd and ldapmodify : # ldapmodify -c -D " cn=Directory Manager " -W -H ldap://server.example.com -x 1.2. Adding an LDAP entry using the command line To add a new entry to the directory, use the ldapadd or ldapmodify utility. Note that /bin/ldapadd is a symbolic link to /bin/ldapmodify . Therefore, ldapadd performs the same operation as ldapmodify -a . Note You can only add a new directory entry if the parent entry already exists. For example, you cannot add cn=user,ou=people,dc=example,dc=com , if the ou=people,dc=example,dc=com parent entry does not exist. 1.2.1. Adding an entry using ldapadd To use the ldapadd utility to add, for example, the uid=user,ou=People,dc=example,dc=com user entry, enter: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com uid: user givenName: given_name objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetorgperson sn: surname cn: user Note Running ldapadd automatically performs a changetype: add operation. Therefore, you do not need to specify changetype: add in the LDIF statement. Additional resources ldapadd(1) man page 1.2.2. Adding an entry using ldapmodify To use the ldapmodify utility to add, for example, the uid=user,ou=People,dc=example,dc=com user entry, enter: # ldapmodify -a -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com uid: user givenName: given_name objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetorgperson sn: surname cn: user Note When passing the -a parameter to the ldapmodify command, the utility automatically performs a changetype: add operation. Therefore, you do not need to specify changetype: add in the LDIF statement. Additional resources ldapmodify(1) man page 1.2.3. Creating a root entry of a database suffix To create the root entry of a database suffix, such as dc=example,dc=com , bind as the cn=Directory Manager user and add the entry. The distinguished name (DN) corresponds to the DN of the root or sub-suffix of the database. For example, to add the dc=example,dc=com suffix, enter: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: dc=example,dc=com changetype: add objectClass: top objectClass: domain dc: example Note You can add root objects only if you have one database per suffix. If you create a suffix that is stored in several databases, you must use the dsctl ldif2db command to set the database that will hold the new entries. Additional resources Importing data using the command line while the server is offline 1.3. Updating an LDAP entry using the command line When you modify a directory entry, use the changetype: modify statement. Depending on the change operation, you can add, change, or delete attributes from the entry. 1.3.1. Adding attributes to an LDAP entry To add an attribute to an LDAP entry, use the add operation. For example, to add the telephoneNumber attribute with the 555-1234567 value to the uid=user,ou=People,dc=example,dc=com entry, enter: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify add: telephoneNumber telephoneNumber: 555-1234567 If an attribute is multi-valued, you can specify the attribute name multiple times to add all the values in a single operation. 
For example, to add two telephoneNumber values at once to the uid=user,ou=People,dc=example,dc=com entry, enter: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify add: telephoneNumber telephoneNumber: 555-1234567 telephoneNumber: 555-7654321 1.3.2. Updating the value of an attribute The procedure for updating an attribute's value depends on whether the attribute is single-valued or multi-valued: Updating a single-value attribute: When updating a single-value attribute, use the replace operation to override the existing value. The following command updates the manager attribute of the uid=user,ou=People,dc=example,dc=com entry: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify replace: manager manager: uid=manager_name,ou=People,dc=example,dc=com Updating a specific value of a multi-value attribute: To update a specific value of a multi-value attribute, first delete the value you want to replace, and then add the new value. The following command updates only the telephoneNumber attribute that is currently set to 555-1234567 in the uid=user,ou=People,dc=example,dc=com entry: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify delete: telephoneNumber telephoneNumber: 555-1234567 - add: telephoneNumber telephoneNumber: 555-9876543 1.3.3. Deleting attributes from an entry To delete an attribute from an entry, use the delete operation: Deleting an attribute: For example, to delete the manager attribute from the uid=user,ou=People,dc=example,dc=com entry, enter: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify delete: manager Important If the attribute contains multiple values, this operation deletes all of them. Deleting a specific value of a multi-value attribute: If you want to delete a specific value from a multi-value attribute, list the attribute and its value in the LDAP Data Interchange Format (LDIF) statement. For example, to delete only the telephoneNumber attribute that is set to 555-1234567 from the uid=user,ou=People,dc=example,dc=com entry, enter: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify delete: telephoneNumber telephoneNumber: 555-1234567 1.4. Renaming and moving an LDAP entry The following rename operations exist: Renaming an entry If you rename an entry, the modrdn operation changes the relative distinguished name (RDN) of the entry. Renaming a subentry For subtree entries, the modrdn operation renames the subtree and also the DN components of child entries. Note that for large subtrees, this process can take a lot of time and resources. Moving an entry to a new parent A similar action to renaming a subtree is moving an entry from one subtree to another. This is an expanded type of the modrdn operation, which simultaneously renames the entry and sets a newSuperior attribute which moves the entry from one parent to another. 1.4.1. Considerations for renaming LDAP entries Keep the following in mind when performing rename operations: You cannot rename the root suffix. Subtree rename operations have minimal effect on replication. Replication agreements are applied to an entire database, not to a subtree within the database. 
Therefore, a subtree rename operation does not require re-configuring a replication agreement. All name changes after a subtree rename operation are replicated as normal. Renaming a subtree might require any synchronization agreements to be reconfigured. Synchronization agreements are set at the suffix or subtree level. Therefore, renaming a subtree can break synchronization. Renaming a subtree requires that any subtree-level access control instructions (ACI) set for the subtree be reconfigured manually, as well as any entry-level ACIs set for child entries of the subtree. Trying to change the component of a subtree, such as moving from ou to dc , might fail with a schema violation. For example, the organizationalUnit object class requires the ou attribute. If that attribute is removed as part of renaming the subtree, the operation fails. If you move a group, the MemberOf plug-in automatically updates the memberOf attributes. However, if you move a subtree that contains groups, you must manually create a task in the cn=memberof task entry or use the dsconf memberof fixup command to update the related memberOf attributes. 1.4.2. Controlling the relative distinguished name behavior when renaming entries When you rename an entry, the deleteOldRDN attribute controls whether the old relative distinguished name (RDN) will be deleted or retained: deleteOldRDN: 0 The existing RDN is retained as a value in the new entry. The resulting entry contains two cn values: one with the old and one with the new common name (CN). For example, the following attributes belong to a group that was renamed from cn=old_group,dc=example,dc=com to cn=new_group,dc=example,dc=com with the deleteOldRDN attribute set to 0 : dn: cn=new_group,ou=Groups,dc=example,dc=com objectClass: top objectClass: groupOfUniqueNames cn: old_group cn: new_group deleteOldRDN: 1 Directory Server deletes the old entry and creates a new entry using the new RDN. The new entry contains only the new cn value. For example, the following group was renamed to cn=new_group,dc=example,dc=com with the deleteOldRDN attribute set to 1 : dn: cn=new_group,ou=Groups,dc=example,dc=com objectClass: top objectClass: groupofuniquenames cn: new_group Additional resources Renaming an LDAP entry or subtree 1.4.3. Renaming an LDAP entry or subtree To rename an entry or subtree, use the changetype: modrdn operation, and set the new relative distinguished name (RDN) in the newrdn attribute. For example, to rename the cn=demo1,dc=example,dc=com entry to cn=demo2,dc=example,dc=com , enter: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: cn=demo1,dc=example,dc=com changetype: modrdn newrdn: cn=demo2 deleteOldRDN: 1 Additional resources Controlling the relative distinguished name behavior when renaming entries 1.4.4. Moving an LDAP entry to a new parent To move an entry to a new parent, use the changetype: modrdn operation, and set the following two attributes: newrdn : Sets the relative distinguished name (RDN) of the moved entry. You must set this attribute, even if the RDN remains the same. newSuperior : Sets the distinguished name (DN) of the new parent entry. 
For example, to move the cn=demo entry from ou=Germany,dc=example,dc=com to ou=France,dc=example,dc=com , enter: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: cn=demo,ou=Germany,dc=example,dc=com changetype: modrdn newrdn: cn=demo newSuperior: ou=France,dc=example,dc=com deleteOldRDN: 1 Additional resources Controlling the relative distinguished name behavior when renaming entries 1.5. Deleting an LDAP entry using the command line You can remove entries from an LDAP directory, but you can only delete entries that have no child entries. For example, you cannot delete ou=People,dc=example,dc=com , if the uid=user,ou=People,dc=example,dc=com entry still exists. 1.5.1. Deleting an entry using ldapdelete The ldapdelete utility enables you to delete one or multiple entries. For example, to delete the uid=user,ou=People,dc=example,dc=com entry, enter: # ldapdelete -D " cn=Directory Manager " -W -H ldap://server.example.com -x " uid=user,ou=People,dc=example,dc=com " To delete multiple entries in one operation, append them to the command: # ldapdelete -D " cn=Directory Manager " -W -H ldap://server.example.com -x " uid=user1,ou=People,dc=example,dc=com " " uid=user2,ou=People,dc=example,dc=com " Additional resources ldapdelete(1) man page 1.5.2. Deleting an entry using ldapmodify To delete an entry using the ldapmodify utility, use the changetype: delete operation. For example, to delete the uid=user,ou=People,dc=example,dc=com entry, enter: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: delete 1.6. Using special characters in OpenLDAP client utilities When using the command line, enclose characters that have a special meaning to the command-line interpreter, such as space ( ), asterisk (*), or backslash (\), with quotation marks. Depending on the command-line interpreter, use single or double quotation marks. For example, to authenticate as the cn=Directory Manager user, enclose the user's distinguished name (DN) in quotation marks: # ldapmodify -a -D " cn=Directory Manager " -W -H ldap://server.example.com -x Additionally, if a DN contains a comma in a component, escape it using a backslash. For example, to authenticate as the uid=user,ou=People,dc=example.com Chicago, IL user, enter: # ldapmodify -a -D " cn=uid=user,ou=People,dc=example.com Chicago\, IL " -W -H ldap://server.example.com -x 1.7. Using binary attributes in LDIF statements Certain attributes support binary values, such as the jpegPhoto attribute. When you add or update such an attribute, the utility reads the value for the attribute from a file. To add or update such an attribute, you can use the ldapmodify utility. For example, to add the jpegPhoto attribute to the uid=user,ou=People,dc=example,dc=com entry, and read the value for the attribute from the /home/user_name/photo.jpg file, enter: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify add: jpegPhoto jpegPhoto:< file: ///home/user_name/photo.jpg Important Note that there is no space between : and < . 1.8. Updating an LDAP entry in an internationalized directory To use attribute values with languages other than English, associate the attribute's value with a language tag. When using ldapmodify to update an attribute that has a language tag set, you must match the value and language tag exactly or the operation will fail. 
For example, to modify an attribute value that has the lang-fr language tag set, include the tag in the modify operation: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify replace: homePostalAddress;lang-fr homePostalAddress;lang-fr: 34 rue de Seine
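As an additional illustration, the same mechanism can store a value under another language tag alongside the existing one. This is a sketch only; the lang-en tag and the translated address value are assumptions, not part of the original procedure, so adapt the attribute and tag to your directory:

# ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: uid=user,ou=People,dc=example,dc=com
changetype: modify
add: homePostalAddress;lang-en
homePostalAddress;lang-en: 34 Seine Street

Because language tags create distinct attribute descriptions, the lang-fr and lang-en values coexist on the entry and can be requested separately in searches.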
[ "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=people,dc=example,dc=com changetype: modify delete: telephoneNumber - add: manager manager: cn=manager_name,ou=people,dc=example,dc=com modifying entry \"uid=user,ou=people,dc=example,dc=com\" ^D", "command_that_outputs_LDIF | ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x", "dn: uid=user,ou=people,dc=example,dc=com changetype: modify delete: telephoneNumber - add: manager manager: cn=manager_name,ou=people,dc=example,dc=com", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -f ~/example.ldif", "ldpamodify -c -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com uid: user givenName: given_name objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetorgperson sn: surname cn: user", "ldapmodify -a -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com uid: user givenName: given_name objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetorgperson sn: surname cn: user", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: dc=example,dc=com changetype: add objectClass: top objectClass: domain dc: example", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify add: telephoneNumber telephoneNumber: 555-1234567", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify add: telephoneNumber telephoneNumber: 555-1234567 telephoneNumber: 555-7654321", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify replace: manager manager: uid=manager_name,ou=People,dc=example,dc=com", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify delete: telephoneNumber telephoneNumber: 555-1234567 - add: telephoneNumber telephoneNumber: 555-9876543", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify delete: manager", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify delete: telephoneNumber telephoneNumber: 555-1234567", "The following rename operations exist:", "dn: cn=new_group,ou=Groups,dc=example,dc=com objectClass: top objectClass: groupOfUniqueNames cn: old_group cn: new_group", "dn: cn=new_group,ou=Groups,dc=example,dc=com objectClass: top objectClass: groupofuniquenames cn: new_group", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=demo1,dc=example,dc=com changetype: modrdn newrdn: cn=demo2 deleteOldRDN: 1", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=demo,ou=Germany,dc=example,dc=com changetype: modrdn newrdn: cn=demo newSuperior: ou=France,dc=example,dc=com deleteOldRDN: 1", "ldapdelete -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x \" uid=user,ou=People,dc=example,dc=com \"", "ldapdelete -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x \" uid=user1,ou=People,dc=example,dc=com \" \" 
uid=user2,ou=People,dc=example,dc=com \"", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: delete", "ldapmodify -a -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x", "ldapmodify -a -D \" cn=uid=user,ou=People,dc=example.com Chicago\\, IL \" -W -H ldap://server.example.com -x", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify add: jpegPhoto jpegPhoto:< file: ///home/user_name/photo.jpg", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify replace: homePostalAddress; lang-fr homePostalAddress; lang-fr : 34 rue de Seine" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/managing_directory_attributes_and_values/assembly_managing-directory-entries-using-the-command-line_managing-directory-attributes-and-values
4.210. openssl-ibmca
4.210. openssl-ibmca 4.210.1. RHBA-2011:1568 - openssl-ibmca bug fix and enhancement update An updated openssl-ibmca package that fixes several bugs and adds various enhancements is available for Red Hat Enterprise Linux 6. The openssl-ibmca package provides a dynamic OpenSSL engine for the IBM eServer Cryptographic Accelerator (ICA) crypto hardware on IBM eServer zSeries machines. The openssl-ibmca package has been upgraded to upstream version 1.2, which provides a number of bug fixes and enhancements over the previous version. (BZ# 694194 ) All users of openssl-ibmca are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. 4.210.2. RHBA-2012:0433 - openssl-ibmca bug fix update An updated openssl-ibmca package that fixes one bug is now available for Red Hat Enterprise Linux 6. The openssl-ibmca package provides a dynamic OpenSSL engine for the IBM eServer Cryptographic Accelerator (ICA) crypto hardware on IBM eServer zSeries machines. Bug Fix BZ# 804612 Due to a bug in the ibmca OpenSSL engine code, applications using the OpenSSL library terminated unexpectedly with a segmentation fault when running the ibmca engine with ciphers enabled in output feedback (OFB) mode on IBM System z, z196 series, hardware. A patch has been applied to address this issue, ensuring that the OpenSSL library no longer crashes under these circumstances. All users of openssl-ibmca are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/openssl-ibmca
function::inode_path
function::inode_path Name function::inode_path - get the path to an inode Synopsis Arguments inode Pointer to inode. Description Returns the full path associated with the given inode.
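For illustration, the following minimal script shows one way this function might be used; it is a sketch under stated assumptions rather than part of the tapset reference. The vfs_unlink probe point and its $dir context variable (the struct inode pointer of the parent directory) are assumptions that depend on your kernel version and on matching kernel debuginfo being installed.

# Hypothetical sketch: report the parent directory path of every unlink.
probe kernel.function("vfs_unlink")
{
  # $dir is the "struct inode *" argument of vfs_unlink on many kernels;
  # inode_path() turns that pointer into a full path string.
  printf("unlink in %s by %s\n", inode_path($dir), execname())
}

Run such a script with stap as root; if $dir is not available on your kernel, adjust the probe point to one that exposes an inode pointer.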
[ "inode_path:string(inode:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-inode-path
Chapter 10. Internationalization
Chapter 10. Internationalization 10.1. Red Hat Enterprise Linux 8 international languages Red Hat Enterprise Linux 8 supports the installation of multiple languages and the changing of languages based on your requirements. East Asian Languages - Japanese, Korean, Simplified Chinese, and Traditional Chinese. European Languages - English, German, Spanish, French, Italian, Portuguese, and Russian. The following table lists the fonts and input methods provided for various major languages. Language Default Font (Font Package) Input Methods English dejavu-sans-fonts French dejavu-sans-fonts German dejavu-sans-fonts Italian dejavu-sans-fonts Russian dejavu-sans-fonts Spanish dejavu-sans-fonts Portuguese dejavu-sans-fonts Simplified Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libpinyin, libpinyin Traditional Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libzhuyin, libzhuyin Japanese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-kkc, libkkc Korean google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-hangul, libhangul 10.2. Notable changes to internationalization in RHEL 8 RHEL 8 introduces the following changes to internationalization compared to RHEL 7: Support for the Unicode 11 computing industry standard has been added. Internationalization is distributed in multiple packages, which allows for smaller footprint installations. For more information, see Using langpacks . Several glibc locales have been synchronized with Unicode Common Locale Data Repository (CLDR).
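To make the langpack mechanism above concrete, the following commands are a sketch of how locale support and an input method for Japanese might be added on RHEL 8. The package names glibc-langpack-ja and langpacks-ja are assumptions to verify against your repositories; ibus-kkc is the input method listed in the table above.

# yum install glibc-langpack-ja langpacks-ja
# yum install ibus-kkc

After installation, the locale can be selected in the desktop settings or set system-wide with localectl.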
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.10_release_notes/internationalization
Chapter 11. Storage
Chapter 11. Storage 11.1. Storage configuration overview You can configure a default storage class, storage profiles, Containerized Data Importer (CDI), data volumes, and automatic boot source updates. 11.1.1. Storage The following storage configuration tasks are mandatory: Configure a default storage class You must configure a default storage class for your cluster. Otherwise, the cluster cannot receive automated boot source updates. Configure storage profiles You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class. The following storage configuration tasks are optional: Reserve additional PVC space for file system overhead By default, 5.5% of a file system PVC is reserved for overhead, reducing the space available for VM disks by that amount. You can configure a different overhead value. Configure local storage by using the hostpath provisioner You can configure local storage for virtual machines by using the hostpath provisioner (HPP). When you install the OpenShift Virtualization Operator, the HPP Operator is automatically installed. Configure user permissions to clone data volumes between namespaces You can configure RBAC roles to enable users to clone data volumes between namespaces. 11.1.2. Containerized Data Importer You can perform the following Containerized Data Importer (CDI) configuration tasks: Override the resource request limits of a namespace You can configure CDI to import, upload, and clone VM disks into namespaces that are subject to CPU and memory resource restrictions. Configure CDI scratch space CDI requires scratch space (temporary storage) to complete some operations, such as importing and uploading VM images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). 11.1.3. Data volumes You can perform the following data volume configuration tasks: Enable preallocation for data volumes CDI can preallocate disk space to improve write performance when creating data volumes. You can enable preallocation for specific data volumes. Manage data volume annotations Data volume annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods. 11.1.4. Boot source updates You can perform the following boot source update configuration task: Manage automatic boot source updates Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, CDI imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources. You can enable automatic updates for custom boot sources. 11.2. Configuring storage profiles A storage profile provides recommended storage settings based on the associated storage class. A storage profile is allocated for each storage class. The Containerized Data Importer (CDI) recognizes a storage provider if it has been configured to identify and interact with the storage provider's capabilities. For recognized storage types, the CDI provides values that optimize the creation of PVCs. You can also configure automatic settings for the storage class by customizing the storage profile. If the CDI does not recognize your storage provider, you must configure storage profiles. 
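Because a storage profile is allocated for each storage class, a quick way to see what you are working with before customizing anything is to list the storage classes and their profiles. This is a minimal check, assuming the oc CLI is configured against your cluster:

$ oc get storageclass
$ oc get storageprofile

The storage class marked (default) in the first command's output is the cluster default; each storage class has a StorageProfile object of the same name, which later sections in this chapter show how to inspect and customize.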
Important When using OpenShift Virtualization with Red Hat OpenShift Data Foundation, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs. To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and VolumeMode: Block . 11.2.1. Customizing the storage profile You can specify default parameters by editing the StorageProfile object for the provisioner's storage class. These default parameters only apply to the persistent volume claim (PVC) if they are not configured in the DataVolume object. You cannot modify storage class parameters. To make changes, delete and re-create the storage class. You must then reapply any customizations that were previously made to the storage profile. An empty status section in a storage profile indicates that a storage provisioner is not recognized by the Containerized Data Importer (CDI). Customizing a storage profile is necessary if you have a storage provisioner that is not recognized by CDI. In this case, the administrator sets appropriate values in the storage profile to ensure successful allocations. Warning If you create a data volume and omit YAML attributes and these attributes are not defined in the storage profile, then the requested storage will not be allocated and the underlying persistent volume claim (PVC) will not be created. Prerequisites Ensure that your planned configuration is supported by the storage class and its provider. Specifying an incompatible configuration in a storage profile causes volume provisioning to fail. Procedure Edit the storage profile. In this example, the provisioner is not recognized by CDI. USD oc edit storageprofile <storage_class> Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> # ... spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class> Provide the needed attribute values in the storage profile: Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> # ... spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class> 1 The accessModes that you select. 2 The volumeMode that you select. After you save your changes, the selected values appear in the storage profile status element. 11.2.1.1. Setting a default cloning strategy using a storage profile You can use storage profiles to set a default cloning method for a storage class, creating a cloning strategy . Setting cloning strategies can be helpful, for example, if your storage vendor only supports certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance. Cloning strategies can be specified by setting the cloneStrategy attribute in a storage profile to one of these values: snapshot is used by default when snapshots are configured. The CDI will use the snapshot method if it recognizes the storage provider and the provider supports Container Storage Interface (CSI) snapshots. This cloning strategy uses a temporary volume snapshot to clone the volume. copy uses a source pod and a target pod to copy data from the source volume to the target volume. Host-assisted cloning is the least efficient method of cloning.
csi-clone uses the CSI clone API to efficiently clone an existing volume without using an interim volume snapshot. Unlike snapshot or copy , which are used by default if no storage profile is defined, CSI volume cloning is only used when you specify it in the StorageProfile object for the provisioner's storage class. Note You can also set clone strategies using the CLI without modifying the default claimPropertySets in your YAML spec section. Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> # ... spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class> 1 Specify the access mode. 2 Specify the volume mode. 3 Specify the default cloning strategy. 11.2.1.2. Viewing automatically created storage profiles The system creates storage profiles for each storage class automatically. Procedure To view the list of storage profiles, run the following command: USD oc get storageprofile To fetch the details of a particular storage profile, run the following command: USD oc describe storageprofile <name> Example storage profile details Name: ocs-storagecluster-ceph-rbd-virtualization Namespace: Labels: app=containerized-data-importer app.kubernetes.io/component=storage app.kubernetes.io/managed-by=cdi-controller app.kubernetes.io/part-of=hyperconverged-cluster app.kubernetes.io/version=4.17.2 cdi.kubevirt.io= Annotations: <none> API Version: cdi.kubevirt.io/v1beta1 Kind: StorageProfile Metadata: Creation Timestamp: 2023-11-13T07:58:02Z Generation: 2 Owner References: API Version: cdi.kubevirt.io/v1beta1 Block Owner Deletion: true Controller: true Kind: CDI Name: cdi-kubevirt-hyperconverged UID: 2d6f169a-382c-4caf-b614-a640f2ef8abb Resource Version: 4186799537 UID: 14aef804-6688-4f2e-986b-0297fd3aaa68 Spec: Status: Claim Property Sets: 1 accessModes: ReadWriteMany volumeMode: Block accessModes: ReadWriteOnce volumeMode: Block accessModes: ReadWriteOnce volumeMode: Filesystem Clone Strategy: csi-clone 2 Data Import Cron Source Format: snapshot 3 Provisioner: openshift-storage.rbd.csi.ceph.com Snapshot Class: ocs-storagecluster-rbdplugin-snapclass Storage Class: ocs-storagecluster-ceph-rbd-virtualization Events: <none> 1 Claim Property Sets is an ordered list of AccessMode / VolumeMode pairs, which describe the PVC modes that are used to provision VM disks. 2 The Clone Strategy line indicates the clone strategy to be used. 3 Data Import Cron Source Format indicates whether golden images on this storage are stored as PVCs or volume snapshots. 11.3. Managing automatic boot source updates You can manage automatic updates for the following boot sources: All Red Hat boot sources All custom boot sources Individual Red Hat or custom boot sources Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources. 11.3.1. Managing Red Hat boot source updates You can opt out of automatic updates for all system-defined boot sources by disabling the enableCommonBootImageImport feature gate. If you disable this feature gate, all DataImportCron objects are deleted. 
This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually. When the enableCommonBootImageImport feature gate is disabled, DataSource objects are reset so that they no longer point to the original boot source. An administrator can manually provide a boot source by creating a new persistent volume claim (PVC) or volume snapshot for the DataSource object, then populating it with an operating system image. 11.3.1.1. Managing automatic updates for all system-defined boot sources Disabling automatic boot source imports and updates can lower resource usage. In disconnected environments, disabling automatic boot source updates prevents CDIDataImportCronOutdated alerts from filling up logs. To disable automatic updates for all system-defined boot sources, turn off the enableCommonBootImageImport feature gate by setting the value to false . Setting this value to true re-enables the feature gate and turns automatic updates back on. Note Custom boot sources are not affected by this setting. Procedure Toggle the feature gate for automatic boot source updates by editing the HyperConverged custom resource (CR). To disable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to false . For example: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op": "replace", "path": \ "/spec/featureGates/enableCommonBootImageImport", \ "value": false}]' To re-enable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to true . For example: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op": "replace", "path": \ "/spec/featureGates/enableCommonBootImageImport", \ "value": true}]' 11.3.2. Managing custom boot source updates Custom boot sources that are not provided by OpenShift Virtualization are not controlled by the feature gate. You must manage them individually by editing the HyperConverged custom resource (CR). Important You must configure a storage class. Otherwise, the cluster cannot receive automated updates for custom boot sources. See Defining a storage class for details. 11.3.2.1. Configuring the default and virt-default storage classes A storage class determines how persistent storage is provisioned for workloads. In OpenShift Virtualization, the virt-default storage class takes precedence over the cluster default storage class and is used specifically for virtualization workloads. Only one storage class should be set as virt-default or cluster default at a time. If multiple storage classes are marked as default, the virt-default storage class overrides the cluster default. To ensure consistent behavior, configure only one storage class as the default for virtualization workloads. Important Boot sources are created using the default storage class. When the default storage class changes, old boot sources are automatically updated using the new default storage class. If your cluster does not have a default storage class, you must define one. If boot source images were stored as volume snapshots and both the cluster default and virt-default storage class have been unset, the volume snapshots are cleaned up and new data volumes will be created. However the newly created data volumes will not start importing until a default storage class is set. 
Procedure Patch the current virt-default or a cluster default storage class to false: Identify all storage classes currently marked as virt-default by running the following command: USD oc get sc -o json| jq '.items[].metadata|select(.annotations."storageclass.kubevirt.io/is-default-virt-class"=="true")|.name' For each storage class returned, remove the virt-default annotation by running the following command: USD oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubevirt.io/is-default-virt-class": "false"}}}' Identify all storage classes currently marked as cluster default by running the following command: USD oc get sc -o json| jq '.items[].metadata|select(.annotations."storageclass.kubernetes.io/is-default-class"=="true")|.name' For each storage class returned, remove the cluster default annotation by running the following command: USD oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Set a new default storage class: Assign the virt-default role to a storage class by running the following command: USD oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubevirt.io/is-default-virt-class": "true"}}}' Alternatively, assign the cluster default role to a storage class by running the following command: USD oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' 11.3.2.2. Configuring a storage class for boot source images You can configure a specific storage class in the HyperConverged resource. Important To ensure stable behavior and avoid unnecessary re-importing, you can specify the storageClassName in the dataImportCronTemplates section of the HyperConverged resource. Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the dataImportCronTemplate to the spec section of the HyperConverged resource and set the storageClassName : apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel9-image-cron spec: template: spec: storage: storageClassName: <storage_class> 1 schedule: "0 */12 * * *" 2 managedDataSource: <data_source> 3 # ... 1 Define the storage class. 2 Required: Schedule for the job specified in cron format. 3 Required: The data source to use. Wait for the HyperConverged Operator (HCO) and Scheduling, Scale, and Performance (SSP) resources to complete reconciliation. Delete any outdated DataVolume and VolumeSnapshot objects from the openshift-virtualization-os-images namespace by running the following command. USD oc delete DataVolume,VolumeSnapshot -n openshift-virtualization-os-images --selector=cdi.kubevirt.io/dataImportCron Wait for all DataSource objects to reach a "Ready - True" status. Data sources can reference either a PersistentVolumeClaim (PVC) or a VolumeSnapshot. To check the expected source format, run the following command: USD oc get storageprofile <storage_class_name> -o json | jq .status.dataImportCronSourceFormat 11.3.2.3. Enabling automatic updates for custom boot sources OpenShift Virtualization automatically updates system-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic updates by editing the HyperConverged custom resource (CR). 
Prerequisites The cluster has a default storage class. Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the HyperConverged CR, adding the appropriate template and boot source in the dataImportCronTemplates section. For example: Example custom resource apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos-stream9-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" 1 spec: schedule: "0 */12 * * *" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi garbageCollect: Outdated managedDataSource: centos-stream9 4 1 This annotation is required for storage classes with volumeBindingMode set to WaitForFirstConsumer . 2 Schedule for the job specified in cron format. 3 Use to create a data volume from a registry source. Use the default pod pullMethod and not node pullMethod , which is based on the node docker cache. The node docker cache is useful when a registry image is available via Container.Image , but the CDI importer is not authorized to access it. 4 For the custom image to be detected as an available boot source, the name of the image's managedDataSource must match the name of the template's DataSource , which is found under spec.dataVolumeTemplates.spec.sourceRef.name in the VM template YAML file. Save the file. 11.3.2.4. Enabling volume snapshot boot sources Enable volume snapshot boot sources by setting the parameter in the StorageProfile associated with the storage class that stores operating system base images. Although DataImportCron was originally designed to maintain only PVC sources, VolumeSnapshot sources scale better than PVC sources for certain storage types. Note Use volume snapshots on a storage profile that is proven to scale better when cloning from a single snapshot. Prerequisites You must have access to a volume snapshot with the operating system image. The storage must support snapshotting. Procedure Open the storage profile object that corresponds to the storage class used to provision boot sources by running the following command: USD oc edit storageprofile <storage_class> Review the dataImportCronSourceFormat specification of the StorageProfile to confirm whether or not the VM is using PVC or volume snapshot by default. Edit the storage profile, if needed, by updating the dataImportCronSourceFormat specification to snapshot . Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: # ... spec: dataImportCronSourceFormat: snapshot Verification Open the storage profile object that corresponds to the storage class used to provision boot sources. USD oc get storageprofile <storage_class> -oyaml Confirm that the dataImportCronSourceFormat specification of the StorageProfile is set to 'snapshot', and that any DataSource objects that the DataImportCron points to now reference volume snapshots. You can now use these boot sources to create virtual machines. 11.3.3. Disabling automatic updates for a single boot source You can disable automatic updates for an individual boot source, whether it is custom or system-defined, by editing the HyperConverged custom resource (CR). 
Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Disable automatic updates for an individual boot source by editing the spec.dataImportCronTemplates field. Custom boot source Remove the boot source from the spec.dataImportCronTemplates field. Automatic updates are disabled for custom boot sources by default. System-defined boot source Add the boot source to spec.dataImportCronTemplates . Note Automatic updates are enabled by default for system-defined boot sources, but these boot sources are not listed in the CR unless you add them. Set the value of the dataimportcrontemplate.kubevirt.io/enable annotation to 'false' . For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: 'false' name: rhel8-image-cron # ... Save the file. 11.3.4. Verifying the status of a boot source You can determine if a boot source is system-defined or custom by viewing the HyperConverged custom resource (CR). Procedure View the contents of the HyperConverged CR by running the following command: USD oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml Example output apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: # ... status: # ... dataImportCronTemplates: - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" name: centos-9-image-cron spec: garbageCollect: Outdated managedDataSource: centos-stream9 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 1 # ... - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream9 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi status: {} status: {} 2 # ... 1 Indicates a system-defined boot source. 2 Indicates a custom boot source. Verify the status of the boot source by reviewing the status.dataImportCronTemplates.status field. If the field contains commonTemplate: true , it is a system-defined boot source. If the status.dataImportCronTemplates.status field has the value {} , it is a custom boot source. 11.4. Reserving PVC space for file system overhead When you add a virtual machine disk to a persistent volume claim (PVC) that uses the Filesystem volume mode, you must ensure that there is enough space on the PVC for the VM disk and for file system overhead, such as metadata. By default, OpenShift Virtualization reserves 5.5% of the PVC space for overhead, reducing the space available for virtual machine disks by that amount. You can configure a different overhead value by editing the HCO object. You can change the value globally and you can specify values for specific storage classes. 11.4.1. Overriding the default file system overhead value Change the amount of persistent volume claim (PVC) space that the OpenShift Virtualization reserves for file system overhead by editing the spec.filesystemOverhead attribute of the HCO object. Prerequisites Install the OpenShift CLI ( oc ). 
Procedure Open the HCO object for editing by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the spec.filesystemOverhead fields, populating them with your chosen values: # ... spec: filesystemOverhead: global: "<new_global_value>" 1 storageClass: <storage_class_name>: "<new_value_for_this_storage_class>" 2 1 The default file system overhead percentage used for any storage classes that do not already have a set value. For example, global: "0.07" reserves 7% of the PVC for file system overhead. 2 The file system overhead percentage for the specified storage class. For example, mystorageclass: "0.04" changes the default overhead value for PVCs in the mystorageclass storage class to 4%. Save and exit the editor to update the HCO object. Verification View the CDIConfig status and verify your changes by running one of the following commands: To generally verify changes to CDIConfig : USD oc get cdiconfig -o yaml To view your specific changes to CDIConfig : USD oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}' 11.5. Configuring local storage by using the hostpath provisioner You can configure local storage for virtual machines by using the hostpath provisioner (HPP). When you install the OpenShift Virtualization Operator, the Hostpath Provisioner Operator is automatically installed. HPP is a local storage provisioner designed for OpenShift Virtualization that is created by the Hostpath Provisioner Operator. To use HPP, you create an HPP custom resource (CR) with a basic storage pool. 11.5.1. Creating a hostpath provisioner with a basic storage pool You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a storagePools stanza. The storage pool specifies the name and path used by the CSI driver. Important Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable. Prerequisites The directories specified in spec.storagePools.path must have read/write access. Procedure Create an hpp_cr.yaml file with a storagePools stanza as in the following example: apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: "/var/myvolumes" 2 workload: nodeSelector: kubernetes.io/os: linux 1 The storagePools stanza is an array to which you can add multiple entries. 2 Specify the storage pool directories under this node path. Save the file and exit. Create the HPP by running the following command: USD oc create -f hpp_cr.yaml 11.5.1.1. About creating storage classes When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it. In order to use the hostpath provisioner (HPP) you must create an associated storage class for the CSI driver with the storagePools stanza. Note Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. 
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the StorageClass value with volumeBindingMode parameter set to WaitForFirstConsumer , the binding and provisioning of the PV is delayed until a pod is created using the PVC. 11.5.1.2. Creating a storage class for the CSI driver with the storagePools stanza To use the hostpath provisioner (HPP) you must create an associated storage class for the Container Storage Interface (CSI) driver. When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it. Note Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the StorageClass value with volumeBindingMode parameter set to WaitForFirstConsumer , the binding and provisioning of the PV is delayed until a pod is created using the PVC. Procedure Create a storageclass_csi.yaml file to define the storage class: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3 1 The two possible reclaimPolicy values are Delete and Retain . If you do not specify a value, the default value is Delete . 2 The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements. 3 Specify the name of the storage pool defined in the HPP CR. Save the file and exit. Create the StorageClass object by running the following command: USD oc create -f storageclass_csi.yaml 11.5.2. About storage pools created with PVC templates If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR). A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation. The PVC template is based on the spec stanza of the PersistentVolumeClaim object: Example PersistentVolumeClaim object apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 1 This value is only required for block volume mode PVs. You define a storage pool using a pvcTemplate specification in the HPP CR. The Operator creates a PVC from the pvcTemplate specification for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes. You can combine basic storage pools with storage pools created from PVC templates. 11.5.2.1. 
Creating a storage pool with a PVC template You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR). Important Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable. Prerequisites The directories specified in spec.storagePools.path must have read/write access. Procedure Create an hpp_pvc_template_pool.yaml file for the HPP CR that specifies a persistent volume (PVC) template in the storagePools stanza according to the following example: apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: "/var/myvolumes" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux 1 The storagePools stanza is an array that can contain both basic and PVC template storage pools. 2 Specify the storage pool directories under this node path. 3 Optional: The volumeMode parameter can be either Block or Filesystem as long as it matches the provisioned volume format. If no value is specified, the default is Filesystem . If the volumeMode is Block , the mounting pod creates an XFS file system on the block volume before mounting it. 4 If the storageClassName parameter is omitted, the default storage class is used to create PVCs. If you omit storageClassName , ensure that the HPP storage class is not the default storage class. 5 You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide or the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request. Save the file and exit. Create the HPP with a storage pool by running the following command: USD oc create -f hpp_pvc_template_pool.yaml 11.6. Enabling user permissions to clone data volumes across namespaces The isolating nature of namespaces means that users cannot by default clone resources between namespaces. To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace. 11.6.1. Creating RBAC resources for cloning data volumes Create a new cluster role that enables permissions for all actions for the datavolumes resource. Prerequisites You must have cluster admin privileges. Procedure Create a ClusterRole manifest: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: ["cdi.kubevirt.io"] resources: ["datavolumes/source"] verbs: ["*"] 1 Unique name for the cluster role. Create the cluster role in the cluster: USD oc create -f <datavolume-cloner.yaml> 1 1 The file name of the ClusterRole manifest created in the step. Create a RoleBinding manifest that applies to both the source and destination namespaces and references the cluster role created in the step. 
apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io 1 Unique name for the role binding. 2 The namespace for the source data volume. 3 The namespace to which the data volume is cloned. 4 The name of the cluster role created in the step. Create the role binding in the cluster: USD oc create -f <datavolume-cloner.yaml> 1 1 The file name of the RoleBinding manifest created in the step. 11.7. Configuring CDI to override CPU and memory quotas You can configure the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions. 11.7.1. About CPU and memory quotas in a namespace A resource quota , defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace. The HyperConverged custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of 0 . This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota. When the AutoResourceLimits feature gate is enabled, OpenShift Virtualization automatically manages CPU and memory limits. If a namespace has both CPU and memory quotas, the memory limit is set to double the base allocation and the CPU limit is one per vCPU. 11.7.2. Overriding CPU and memory defaults Modify the default settings for CPU and memory requests and limits for your use case by adding the spec.resourceRequirements.storageWorkloads stanza to the HyperConverged custom resource (CR). Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit the HyperConverged CR by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the spec.resourceRequirements.storageWorkloads stanza to the CR, setting the values based on your use case. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: "500m" memory: "2Gi" requests: cpu: "250m" memory: "1Gi" Save and exit the editor to update the HyperConverged CR. 11.7.3. Additional resources Resource quotas per project 11.8. Preparing CDI scratch space 11.8.1. About scratch space The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). The scratch space PVC is deleted after the operation completes or aborts. You can define the storage class that is used to bind the scratch space PVC in the spec.scratchSpaceStorageClass field of the HyperConverged custom resource. If the defined storage class does not match a storage class in the cluster, then the default storage class defined for the cluster is used. If there is no default storage class defined in the cluster, the storage class used to provision the original DV or PVC is used. 
Note CDI requires requesting scratch space with a file volume mode, regardless of the PVC backing the origin data volume. If the origin PVC is backed by block volume mode, you must define a storage class capable of provisioning file volume mode PVCs. Manual provisioning If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod. 11.8.2. CDI operations that require scratch space Type Reason Registry imports CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. Upload image QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. HTTP imports of archived images QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. HTTP imports of authenticated images QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. HTTP imports of custom certificates QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG. 11.8.3. Defining a storage class You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the spec.scratchSpaceStorageClass field to the HyperConverged custom resource (CR). Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit the HyperConverged CR by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the spec.scratchSpaceStorageClass field to the CR, setting the value to the name of a storage class that exists in the cluster: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: "<storage_class>" 1 1 If you do not specify a storage class, CDI uses the storage class of the persistent volume claim that is being populated. Save and exit your default editor to update the HyperConverged CR. 11.8.4. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [βœ“] QCOW2 [βœ“] GZ* [βœ“] XZ* [βœ“] QCOW2** [βœ“] GZ* [βœ“] XZ* [βœ“] QCOW2 [βœ“] GZ* [βœ“] XZ* [βœ“] QCOW2* β–‘ GZ β–‘ XZ [βœ“] QCOW2* [βœ“] GZ* [βœ“] XZ* KubeVirt (RAW) [βœ“] RAW [βœ“] GZ [βœ“] XZ [βœ“] RAW [βœ“] GZ [βœ“] XZ [βœ“] RAW [βœ“] GZ [βœ“] XZ [βœ“] RAW* β–‘ GZ β–‘ XZ [βœ“] RAW* [βœ“] GZ* [βœ“] XZ* [βœ“] Supported operation β–‘ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 11.8.5. Additional resources Dynamic provisioning 11.9. Using preallocation for data volumes The Containerized Data Importer can preallocate disk space to improve write performance when creating data volumes. You can enable preallocation for specific data volumes. 11.9.1. 
About preallocation The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes. If preallocation is enabled, CDI uses the better preallocation method depending on the underlying file system and device type: fallocate If the file system supports it, CDI uses the operating system's fallocate call to preallocate space by using the posix_fallocate function, which allocates blocks and marks them as uninitialized. full If fallocate mode cannot be used, full mode allocates space for the image by writing data to the underlying storage. Depending on the storage location, all the empty allocated space might be zeroed. 11.9.2. Enabling preallocation for a data volume You can enable preallocation for specific data volumes by including the spec.preallocation field in the data volume manifest. You can enable preallocation mode in either the web console or by using the OpenShift CLI ( oc ). Preallocation mode is supported for all CDI source types. Procedure Specify the spec.preallocation field in the data volume manifest: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 registry: url: <image_url> 2 storage: resources: requests: storage: 1Gi preallocation: true # ... 1 All CDI source types support preallocation. However, preallocation is ignored for cloning operations. 2 Specify the URL of the data source in your registry. 11.10. Managing data volume annotations Data volume (DV) annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods. 11.10.1. Example: Data volume annotations This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The v1.multus-cni.io/default-network: bridge-network annotation causes the pod to use the multus network named bridge-network as its default network. If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the k8s.v1.cni.cncf.io/networks: <network_name> annotation. Multus network annotation example apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: datavolume-example annotations: v1.multus-cni.io/default-network: bridge-network 1 # ... 1 Multus network annotation 11.11. Understanding virtual machine storage with the CSI paradigm Virtual machines (VMs) in OpenShift Virtualization use PersistentVolume (PV) and PersistentVolumeClaim (PVC) paradigms to manage storage. This ensures seamless integration with the Container Storage Interface (CSI). 11.11.1. Virtual machine CSI storage overview OpenShift Virtualization integrates with the Container Storage Interface (CSI) to manage VM storage. Storage classes define storage capabilities such as performance tiers and types. PersistentVolumeClaims (PVCs) request storage resources, which bind to PersistentVolumes (PVs). CSI drivers connect Kubernetes to vendor storage backends, including iSCSI, NFS, and Fibre Channel.
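To make the PVC paradigm concrete, the following manifest is a minimal sketch of a claim that could back a VM disk. The claim name, storage class, and size are placeholder assumptions; substitute values that exist in your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-example
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  storageClassName: ocs-storagecluster-ceph-rbd-virtualization
  resources:
    requests:
      storage: 30Gi

A data volume or VM disk that references such a claim is provisioned by the CSI driver behind the named storage class, following the same binding and provisioning rules described earlier in this chapter.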
[ "oc edit storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class>", "oc get storageprofile", "oc describe storageprofile <name>", "Name: ocs-storagecluster-ceph-rbd-virtualization Namespace: Labels: app=containerized-data-importer app.kubernetes.io/component=storage app.kubernetes.io/managed-by=cdi-controller app.kubernetes.io/part-of=hyperconverged-cluster app.kubernetes.io/version=4.17.2 cdi.kubevirt.io= Annotations: <none> API Version: cdi.kubevirt.io/v1beta1 Kind: StorageProfile Metadata: Creation Timestamp: 2023-11-13T07:58:02Z Generation: 2 Owner References: API Version: cdi.kubevirt.io/v1beta1 Block Owner Deletion: true Controller: true Kind: CDI Name: cdi-kubevirt-hyperconverged UID: 2d6f169a-382c-4caf-b614-a640f2ef8abb Resource Version: 4186799537 UID: 14aef804-6688-4f2e-986b-0297fd3aaa68 Spec: Status: Claim Property Sets: 1 accessModes: ReadWriteMany volumeMode: Block accessModes: ReadWriteOnce volumeMode: Block accessModes: ReadWriteOnce volumeMode: Filesystem Clone Strategy: csi-clone 2 Data Import Cron Source Format: snapshot 3 Provisioner: openshift-storage.rbd.csi.ceph.com Snapshot Class: ocs-storagecluster-rbdplugin-snapclass Storage Class: ocs-storagecluster-ceph-rbd-virtualization Events: <none>", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": false}]'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": true}]'", "oc get sc -o json| jq '.items[].metadata|select(.annotations.\"storageclass.kubevirt.io/is-default-virt-class\"==\"true\")|.name'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubevirt.io/is-default-virt-class\": \"false\"}}}'", "oc get sc -o json| jq '.items[].metadata|select(.annotations.\"storageclass.kubernetes.io/is-default-class\"==\"true\")|.name'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubevirt.io/is-default-virt-class\": \"true\"}}}'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel9-image-cron spec: template: spec: storage: storageClassName: <storage_class> 1 schedule: \"0 */12 * * *\" 2 managedDataSource: <data_source> 3", 
"For the custom image to be detected as an available boot source, the value of the `spec.dataVolumeTemplates.spec.sourceRef.name` parameter in the VM template must match this value.", "oc delete DataVolume,VolumeSnapshot -n openshift-virtualization-os-images --selector=cdi.kubevirt.io/dataImportCron", "oc get storageprofile <storage_class_name> -o json | jq .status.dataImportCronSourceFormat", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos-stream9-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" 1 spec: schedule: \"0 */12 * * *\" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi garbageCollect: Outdated managedDataSource: centos-stream9 4", "oc edit storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: spec: dataImportCronSourceFormat: snapshot", "oc get storageprofile <storage_class> -oyaml", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: 'false' name: rhel8-image-cron", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: status: dataImportCronTemplates: - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: centos-9-image-cron spec: garbageCollect: Outdated managedDataSource: centos-stream9 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 1 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream9 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi status: {} status: {} 2", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "spec: filesystemOverhead: global: \"<new_global_value>\" 1 storageClass: <storage_class_name>: \"<new_value_for_this_storage_class>\" 2", "oc get cdiconfig -o yaml", "oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: \"/var/myvolumes\" 2 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_cr.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3", "oc create -f storageclass_csi.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: 
HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: \"/var/myvolumes\" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_pvc_template_pool.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]", "oc create -f <datavolume-cloner.yaml> 1", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io", "oc create -f <datavolume-cloner.yaml> 1", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: \"500m\" memory: \"2Gi\" requests: cpu: \"250m\" memory: \"1Gi\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: \"<storage_class>\" 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 registry: url: <image_url> 2 storage: resources: requests: storage: 1Gi preallocation: true", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: datavolume-example annotations: v1.multus-cni.io/default-network: bridge-network 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/virtualization/storage
Chapter 13. Volume Snapshots
Chapter 13. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help you use storage more efficiently because you do not have to make a full copy each time, and they can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots. 13.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure that you stop all I/O before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage → Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) → Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions → Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage → Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage → Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 13.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore a volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage → Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice.
Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage → Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage → Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 13.3. Deleting volume snapshots Prerequisites To delete a volume snapshot, the volume snapshot class that is used by that particular volume snapshot must be present. Procedure From Persistent Volume Claims page Click Storage → Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) → Delete Volume Snapshot . From Volume Snapshots page Click Storage → Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot click Action menu (...) → Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage → Volume Snapshots and ensure that the deleted volume snapshot is not listed.
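As a rough CLI equivalent of the snapshot creation steps in Section 13.1, the following sketch creates a VolumeSnapshot directly. The snapshot name, project, and parent PVC name are assumptions; the snapshot class shown is the default RBD class name referenced elsewhere in this documentation and must match your provisioner.
cat <<'EOF' | oc create -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot        # assumed snapshot name
  namespace: my-project       # assumed project
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass   # assumed RBD snapshot class
  source:
    persistentVolumeClaimName: mysql-pvc   # assumed parent PVC in Bound state
EOF
oc get volumesnapshot mysql-snapshot -n my-project -w   # wait for READYTOUSE to become true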
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/volume-snapshots_osp
7.4. Run Red Hat JBoss Data Grid with a Custom Configuration
7.4. Run Red Hat JBoss Data Grid with a Custom Configuration To run Red Hat JBoss Data Grid with a custom configuration, add a configuration file to the $JDG_HOME/standalone/configuration directory. Use the following command to specify the created custom configuration file for standalone mode: Use the following command to specify the created custom configuration file for clustered mode: The -c parameter used for this script does not allow absolute paths; therefore, the specified file must be available in the $JDG_HOME/standalone/configuration directory. If the command is run without the -c parameter, JBoss Data Grid uses the default configuration.
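For example (the configuration file name is assumed), copy a custom clustered configuration into the expected directory and start JBoss Data Grid with it:
cp clustered-custom.xml $JDG_HOME/standalone/configuration/
$JDG_HOME/bin/clustered.sh -c clustered-custom.xml   # pass the file name only; absolute paths are not accepted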
[ "$JDG_HOME/bin/standalone.sh -c ${FILENAME}", "$JDG_HOME/bin/clustered.sh -c ${FILENAME}" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/run_red_hat_jboss_data_grid_with_a_custom_configuration
Release Notes for AMQ Streams 2.5 on RHEL
Release Notes for AMQ Streams 2.5 on RHEL Red Hat Streams for Apache Kafka 2.5 Highlights of what's new and what's changed with this release of AMQ Streams on Red Hat Enterprise Linux
[ "strimzi.authorization.grants.max.idle.time.seconds=\"300\" strimzi.authorization.grants.gc.period.seconds=\"300\" strimzi.authorization.reuse.grants=\"false\"", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; # oauth.username.claim=\"['user.info'].['user.id']\" \\ 1 oauth.fallback.username.claim=\"['client.info'].['client.id']\" \\ 2 #", "client.quota.callback.class= io.strimzi.kafka.quotas.StaticQuotaCallback client.quota.callback.static.produce= 1000000 client.quota.callback.static.fetch= 1000000 client.quota.callback.static.storage.soft= 400000000000 client.quota.callback.static.storage.hard= 500000000000 client.quota.callback.static.storage.check-interval= 5" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html-single/release_notes_for_amq_streams_2.5_on_rhel/%7Bsupported-configurations%7D
Chapter 12. Replacing a failed disk
Chapter 12. Replacing a failed disk If one of the disks fails in your Ceph cluster, complete the following procedures to replace it: Determining if there is a device name change, see Section 12.1, "Determining if there is a device name change" . Ensuring that the OSD is down and destroyed, see Section 12.2, "Ensuring that the OSD is down and destroyed" . Removing the old disk from the system and installing the replacement disk, see Section 12.3, "Removing the old disk from the system and installing the replacement disk" . Verifying that the disk replacement is successful, see Section 12.4, "Verifying that the disk replacement is successful" . 12.1. Determining if there is a device name change Before you replace the disk, determine if the replacement disk for the replacement OSD has a different name in the operating system than the device that you want to replace. If the replacement disk has a different name, you must update Ansible parameters for the devices list so that subsequent runs of ceph-ansible , including when director runs ceph-ansible , do not fail as a result of the change. For an example of the devices list that you must change when you use director, see Section 5.3, "Mapping the Ceph Storage node disk layout" . Warning If the device name changes and you use the following procedures to update your system outside of ceph-ansible or director, there is a risk that the configuration management tools are out of sync with the system that they manage until you update the system definition files and the configuration is reasserted without error. Persistent naming of storage devices Storage devices that the sd driver manages might not always have the same name across reboots. For example, a disk that is normally identified by /dev/sdc might be named /dev/sdb . It is also possible for the replacement disk, /dev/sdc , to appear in the operating system as /dev/sdd even if you want to use it as a replacement for /dev/sdc . To address this issue, use names that are persistent and match the following pattern: /dev/disk/by-* . For more information, see Persistent Naming in the Red Hat Enterprise Linux (RHEL) 7 Storage Administration Guide . Depending on the naming method that you use to deploy Ceph, you might need to update the devices list after you replace the OSD. Use the following list of naming methods to determine if you must change the devices list: The major and minor number range method If you used sd and want to continue to use it, after you install the new disk, check if the name has changed. If the name did not change, for example, if the same name appears correctly as /dev/sdd , it is not necessary to change the name after you complete the disk replacement procedures. Important This naming method is not recommended because there is still a risk that the name becomes inconsistent over time. For more information, see Persistent Naming in the RHEL 7 Storage Administration Guide . The by-path method If you use this method, and you add a replacement disk in the same slot, then the path is consistent and no change is necessary. Important Although this naming method is preferable to the major and minor number range method, use caution to ensure that the target numbers do not change. For example, use persistent binding and update the names if a host adapter is moved to a different PCI slot. In addition, there is the possibility that the SCSI host numbers can change if a HBA fails to probe, if drivers are loaded in a different order, or if a new HBA is installed on the system. 
The by-path naming method also differs between RHEL7 and RHEL8. For more information, see: Article [What is the difference between "by-path" links created in RHEL8 and RHEL7?] https://access.redhat.com/solutions/5171991 Overview of persistent naming attributes in the RHEL 8 Managing file systems guide. The by-uuid method If you use this method, you can use the blkid utility to set the new disk to have the same UUID as the old disk. For more information, see Persistent Naming in the RHEL 7 Storage Administration Guide . The by-id method If you use this method, you must change the devices list because this identifier is a property of the device and the device has been replaced. When you add the new disk to the system, if it is possible to modify the persistent naming attributes according to the RHEL7 Storage Administrator Guide , see Persistent Naming , so that the device name is unchanged, then it is not necessary to update the devices list and re-run ceph-ansible , or trigger director to re-run ceph-ansible and you can proceed with the disk replacement procedures. However, you can re-run ceph-ansible to ensure that the change did not result in any inconsistencies. Warning Confirm that the replacement disk is the same size as the original disk to ensure consistent Red Hat Ceph Storage performance. If a disk of the same size is not available, contact Red Hat Ceph Storage support before continuing with disk replacement. 12.2. Ensuring that the OSD is down and destroyed On the server that hosts the Ceph Monitor, use the ceph command in the running monitor container to ensure that the OSD that you want to replace is down, and then destroy it. Procedure Identify the name of the running Ceph monitor container and store it in an environment variable called MON : Alias the ceph command so that it executes within the running Ceph monitor container: Use the new alias to verify that the OSD that you want to replace is down: Destroy the OSD. The following example command destroys OSD 27 : 12.3. Removing the old disk from the system and installing the replacement disk On the container host with the OSD that you want to replace, remove the old disk from the system and install the replacement disk. Prerequisites: Verify that the device ID has changed. For more information, see Section 12.1, "Determining if there is a device name change" . The ceph-volume command is present in the Ceph container but is not installed on the overcloud node. Create an alias so that the ceph-volume command runs the ceph-volume binary inside the Ceph container. Then use the ceph-volume command to clean the new disk and add it as an OSD. Procedure Ensure that the failed OSD is not running: Identify the image ID of the ceph container image and store it in an environment variable called IMG : Alias the ceph-volume command so that it runs inside the USDIMG Ceph container, with the ceph-volume entry point and relevant directories: Verify that the aliased command runs successfully: Check that your new OSD device is not already part of LVM. Use the pvdisplay command to inspect the device, and ensure that the VG Name field is empty. Replace <NEW_DEVICE> with the /dev/* path of your new OSD device: If the VG Name field is not empty, then the device belongs to a volume group that you must remove. If the device belongs to a volume group, use the lvdisplay command to check if there is a logical volume in the volume group. 
Replace <VOLUME_GROUP> with the value of the VG Name field that you retrieved from the pvdisplay command: If the LV Path field is not empty, then the device contains a logical volume that you must remove. If the new device is part of a logical volume or volume group, remove the logical volume, volume group, and the device association as a physical volume within the LVM system. Replace <LV_PATH> with the value of the LV Path field. Replace <VOLUME_GROUP> with the value of the VG Name field. Replace <NEW_DEVICE> with the /dev/* path of your new OSD device. Ensure that the new OSD device is clean. In the following example, the device is /dev/sdj : Create the new OSD with the existing OSD ID by using the new device but pass --no-systemd so that ceph-volume does not attempt to start the OSD. This is not possible from within the container: Important If you deployed Ceph with custom parameters, such as a separate block.db , ensure that you use the custom parameters when you replace the OSD. Start the OSD outside of the container: 12.4. Verifying that the disk replacement is successful To check that your disk replacement is successful, on the undercloud, complete the following steps. Procedure Check if the device name changed, update the devices list according to the naming method you used to deploy Ceph. For more information, see Section 12.1, "Determining if there is a device name change" . To ensure that the change did not introduce any inconsistencies, re-run the overcloud deploy command to perform a stack update. In cases where you have hosts that have different device lists, you might have to define an exception. For example, you might use the following example heat environment file to deploy a node with three OSD devices. The CephAnsibleDisksConfig parameter applies to all nodes that host OSDs, so you cannot update the devices parameter with the new device list. Instead, you must define an exception for the new host that has a different device list. For more information about defining an exception, see Section 5.5, "Overriding parameters for dissimilar Ceph Storage nodes" and Section 5.5.1.2, "Altering the disk layout in Ceph Storage nodes" .
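For the device-name check described in Section 12.1, a quick sketch of comparing persistent names before and after the swap; /dev/sdc and /dev/sdd follow the example in that section, and the commands are run on the OSD host and only read state.
ls -l /dev/disk/by-path/ | grep -E 'sdc|sdd'
ls -l /dev/disk/by-id/   | grep -E 'sdc|sdd'
udevadm info --query=symlink --name=/dev/sdd   # all persistent symlinks udev created for the new disk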
[ "MON=USD(podman ps | grep ceph-mon | awk {'print USD1'})", "alias ceph=\"podman exec USDMON ceph\"", "ceph osd tree | grep 27 27 hdd 0.04790 osd.27 down 1.00000 1.00000", "ceph osd destroy 27 --yes-i-really-mean-it destroyed osd.27", "systemctl stop ceph-osd@27", "IMG=USD(podman images | grep ceph | awk {'print USD3'})", "alias ceph-volume=\"podman run --rm --privileged --net=host --ipc=host -v /run/lock/lvm:/run/lock/lvm:z -v /var/run/udev/:/var/run/udev/:z -v /dev:/dev -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /var/log/ceph/:/var/log/ceph/:z --entrypoint=ceph-volume USDIMG --cluster ceph\"", "ceph-volume lvm list", "pvdisplay <NEW_DEVICE> --- Physical volume --- PV Name /dev/sdj VG Name ceph-0fb0de13-fc8e-44c8-99ea-911e343191d2 PV Size 50.00 GiB / not usable 1.00 GiB Allocatable yes (but full) PE Size 1.00 GiB Total PE 49 Free PE 0 Allocated PE 49 PV UUID kOO0If-ge2F-UH44-6S1z-9tAv-7ypT-7by4cp", "lvdisplay | grep <VOLUME_GROUP> LV Path /dev/ceph-0fb0de13-fc8e-44c8-99ea-911e343191d2/osd-data-a0810722-7673-43c7-8511-2fd9db1dbbc6 VG Name ceph-0fb0de13-fc8e-44c8-99ea-911e343191d2", "lvremove --force <LV_PATH> Logical volume \"osd-data-a0810722-7673-43c7-8511-2fd9db1dbbc6\" successfully removed", "vgremove --force <VOLUME_GROUP> Volume group \"ceph-0fb0de13-fc8e-44c8-99ea-911e343191d2\" successfully removed", "pvremove <NEW_DEVICE> Labels on physical volume \"/dev/sdj\" successfully wiped.", "ceph-volume lvm zap /dev/sdj --> Zapping: /dev/sdj --> --destroy was not specified, but zapping a whole device will remove the partition table Running command: /usr/sbin/wipefs --all /dev/sdj Running command: /bin/dd if=/dev/zero of=/dev/sdj bs=1M count=10 stderr: 10+0 records in 10+0 records out 10485760 bytes (10 MB, 10 MiB) copied, 0.010618 s, 988 MB/s --> Zapping successful for: <Raw Device: /dev/sdj>", "ceph-volume lvm create --osd-id 27 --data /dev/sdj --no-systemd", "systemctl start ceph-osd@27", "parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/sdb - /dev/sdc - /dev/sdd osd_scenario: lvm osd_objectstore: bluestore" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_an_overcloud_with_containerized_red_hat_ceph/replacing_a_failed_disk
Working with model registries
Working with model registries Red Hat OpenShift AI Cloud Service 1 Working with model registries in Red Hat OpenShift AI Cloud Service
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_model_registries/index
Chapter 3. Storage
Chapter 3. Storage LIO kernel Target Subsystem Red Hat Enterprise Linux 7 uses the LIO kernel target subsystem, which is the standard open source SCSI target for block storage, for all of the following storage fabrics: FCoE, iSCSI, iSER (Mellanox InfiniBand), and SRP (Mellanox InfiniBand). Red Hat Enterprise Linux 6 uses tgtd , the SCSI Target Daemon, for iSCSI target support, and only uses LIO, the Linux kernel target, for Fibre-Channel over Ethernet (FCoE) targets via the fcoe-target-utils package. The targetcli shell provides the general management platform for the LIO Linux SCSI target. LVM Cache Red Hat Enterprise Linux 7 introduces LVM cache as a Technology Preview. This feature allows users to create logical volumes with a small fast device performing as a cache to larger slower devices. Please refer to the lvm(8) manual page for information on creating cache logical volumes. Note that the following commands are not currently allowed on cache logical volumes: pvmove : will skip over any cache logical volume; lvresize , lvreduce , lvextend : cache logical volumes cannot be resized currently; vgsplit : splitting a volume group is not allowed when cache logical volumes exist in it. Storage Array Management with libStorageMgmt API Red Hat Enterprise Linux 7 introduces storage array management as a Technology Preview. libStorageMgmt is a storage array independent Application Programming Interface (API). It provides a stable and consistent API that allows developers to programmatically manage different storage arrays and utilize the hardware-accelerated features provided. System administrators can also use it as a tool to manually configure storage and to automate storage management tasks with the included Command Line Interface (CLI). Support for LSI Syncro Red Hat Enterprise Linux 7 includes code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter will be provided directly by LSI, your system integrator, or system vendor. Users deploying Syncro CS on Red Hat Enterprise Linux 7 are encouraged to provide feedback to Red Hat and LSI. For more information on LSI Syncro CS solutions, please visit http://www.lsi.com/products/shared-das/pages/default.aspx . LVM Application Programming Interface Red Hat Enterprise Linux 7 features the new LVM application programming interface (API) as a Technology Preview. This API is used to query and control certain aspects of LVM. Refer to the lvm2app.h header file for more information. DIF/DIX Support DIF/DIX is a new addition to the SCSI Standard and a Technology Preview in Red Hat Enterprise Linux 7. DIF/DIX increases the size of the commonly used 512-byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receive, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can be verified by the storage device, and by the receiving HBA. For more information, refer to the section Block Devices with DIF/DIX Enabled in the Storage Administration Guide . Support of Parallel NFS Parallel NFS (pNFS) is a part of the NFS v4.1 standard that allows clients to access storage devices directly and in parallel. 
The pNFS architecture can improve the scalability and performance of NFS servers for several common workloads. pNFS defines three different storage protocols or layouts: files, objects, and blocks. The Red Hat Enterprise Linux 7 client fully supports the files layout, and the blocks and object layouts are supported as a Technology Preview. Red Hat continues to work with partners and open source projects to qualify new pNFS layout types and to provide full support for more layout types in the future. For more information on pNFS, refer to http://www.pnfs.com/ .
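For the LVM cache Technology Preview described above, a minimal sketch of attaching a cache pool to an origin logical volume, following the lvm(8) workflow. The volume group vg0, the slow physical volume /dev/sdb, the fast physical volume /dev/nvme0n1, and the sizes are assumptions.
lvcreate -n origin -L 100G vg0 /dev/sdb                       # large, slow origin LV
lvcreate --type cache-pool -n cpool -L 10G vg0 /dev/nvme0n1   # small, fast cache pool
lvconvert --type cache --cachepool vg0/cpool vg0/origin       # attach the pool; vg0/origin becomes a cache LV
lvs -a -o name,size,segtype vg0                               # verify the cache and cache-pool segments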
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/chap-Red_Hat_Enterprise_Linux-7.0_Release_Notes-Storage
Chapter 33. InlineLogging schema reference
Chapter 33. InlineLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the InlineLogging type from ExternalLogging . It must have the value inline for the type InlineLogging . Property Description type Must be inline . string loggers A Map from logger name to logger level. map
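A short sketch of how this type is typically used inside a custom resource spec. The surrounding fields and the logger names are illustrative only; valid logger names depend on the component being configured.
spec:
  kafka:
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: INFO              # root logger level
        log4j.logger.kafka.request.logger: WARN    # example per-logger override (assumed logger name)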
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-InlineLogging-reference
Chapter 28. OpenShift
Chapter 28. OpenShift The namespace for openshift-logging specific metadata Data type group 28.1. openshift.labels Labels added by the Cluster Log Forwarder configuration Data type group
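A sketch of where such labels originate: a labels map on a ClusterLogForwarder pipeline is attached to each forwarded record under openshift.labels. The resource below follows the ClusterLogForwarder API; the pipeline name and label values are assumptions.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: application-logs
      inputRefs:
        - application
      outputRefs:
        - default
      labels:
        environment: production   # appears in records as openshift.labels.environment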
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/openshift
probe::ipmib.InReceives
probe::ipmib.InReceives Name probe::ipmib.InReceives - Count an arriving packet Synopsis ipmib.InReceives Values skb pointer to the struct sk_buff being acted on op value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global InReceives (equivalent to SNMP's MIB IPSTATS_MIB_INRECEIVES)
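A small usage sketch, assuming systemtap and the matching kernel debuginfo are installed, that tallies this counter for ten seconds:
stap -e 'global n
probe ipmib.InReceives { n += op }
probe timer.s(10) { printf("InReceives over 10s: %d\n", n); exit() }'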
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ipmib-inreceives
Chapter 1. Overview
Chapter 1. Overview The Storage Administration Guide contains extensive information on supported file systems and data storage features in Red Hat Enterprise Linux 6. This book is intended as a quick reference for administrators managing single-node (that is, non-clustered) storage solutions. The Storage Administration Guide is split into two parts: File Systems, and Storage Administration. The File Systems part details the various file systems Red Hat Enterprise Linux 6 supports. It describes them and explains how best to utilize them. The Storage Administration part details the various tools and storage administration tasks Red Hat Enterprise Linux 6 supports. It describes them and explains how best to utilize them. 1.1. What's New in Red Hat Enterprise Linux 6 Red Hat Enterprise Linux 6 features the following file system enhancements: File System Encryption (Technology Preview) It is now possible to encrypt a file system at mount using eCryptfs [1] , providing an encryption layer on top of an actual file system. This "pseudo-file system" allows per-file and file name encryption, which offers more granular encryption than encrypted block devices. For more information about file system encryption, refer to Chapter 3, Encrypted File System . File System Caching (Technology Preview) FS-Cache [1] allows the use of local storage for caching data from file systems served over the network (for example, through NFS). This helps minimize network traffic, although it does not guarantee faster access to data over the network. FS-Cache allows a file system on a server to interact directly with a client's local cache without creating an overmounted file system. For more information about FS-Cache, refer to Chapter 10, FS-Cache . Btrfs (Technology Preview) Btrfs [1] is a local file system that is now available. It aims to provide better performance and scalability, including integrated LVM operations. For more information on Btrfs, refer to Chapter 4, Btrfs . I/O Limit Processing The Linux I/O stack can now process I/O limit information for devices that provide it. This allows storage management tools to better optimize I/O for some devices. For more information on this, refer to Chapter 23, Storage I/O Alignment and Size . ext4 Support The ext4 file system is fully supported in this release. It is now the default file system of Red Hat Enterprise Linux 6, supporting an unlimited number of subdirectories. It also features more granular timestamping, extended attributes support, and quota journaling. For more information on ext4, refer to Chapter 6, The Ext4 File System . Network Block Storage Fibre-channel over Ethernet is now supported. This allows a fibre-channel interface to use 10-Gigabit Ethernet networks while preserving the fibre-channel protocol. For instructions on how to set this up, refer to Chapter 32, Configuring a Fibre-Channel Over Ethernet Interface . [1] This feature is being provided in this release as a technology preview . Technology Preview features are currently not supported under Red Hat Enterprise Linux subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide the feature with wider exposure. You are free to provide feedback and functionality suggestions for a technology preview feature before it becomes fully supported. Erratas will be provided for high-severity security issues.
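As one illustration of the eCryptfs preview mentioned above, the encryption layer is typically stacked over an existing directory at mount time; this is only a sketch, and the mount prompts interactively for the key and cipher options.
mount -t ecryptfs /home/secure /home/secure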
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/part-overvw
Jenkins
Jenkins OpenShift Container Platform 4.13 Jenkins Red Hat OpenShift Documentation Team
[ "podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>", "oc new-app -e JENKINS_PASSWORD=<password> ocp-tools-4/jenkins-rhel8", "oc describe serviceaccount jenkins", "Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp", "oc describe secret <secret name from above>", "Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA", "pluginId:pluginVersion", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest", "kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>", "kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> 
<resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>", "oc new-app jenkins-persistent", "oc new-app jenkins-ephemeral", "oc describe jenkins-ephemeral", "kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange", "def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }", "docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>", "docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag>", "podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }", "pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } }", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make\"] workingDir: USD(workspaces.source.path)", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: 
workspaces: - name: source steps: - image: my-ci-image command: [\"make check\"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path)", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: [\"make deploy\"] workingDir: USD(workspaces.source.path)", "apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir", "apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: [\"mvn test\"] workingDir: USD(workspaces.source.path)", "steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh", "steps: image: python script: | #!/usr/bin/env python3 print(\"hello from python!\")", "#!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' }", "apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: \"USD(params.repo-url)\" - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"-DskipTests\", \"clean\", \"compile\"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"test\"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: 
shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"package\"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd \"USD(params.context-path)\" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json", "oc import-image jenkins-agent-nodejs -n openshift", "oc import-image jenkins-agent-maven -n openshift", "oc patch dc jenkins -p '{\"spec\":{\"triggers\":[{\"type\":\"ImageChange\",\"imageChangeParams\":{\"automatic\":true,\"containerNames\":[\"jenkins\"],\"from\":{\"kind\":\"ImageStreamTag\",\"namespace\":\"<namespace>\",\"name\":\"jenkins:<image_stream_tag>\"}}}]}}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/jenkins/index
Installing and Configuring Red Hat Discovery
Installing and Configuring Red Hat Discovery Subscription Central 1-latest Installing Red Hat Discovery Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/installing_and_configuring_red_hat_discovery/index
Operators
Operators OpenShift Container Platform 4.16 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operators/index
Chapter 7. Management of monitoring stack using the Ceph Orchestrator
Chapter 7. Management of monitoring stack using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to deploy the monitoring and alerting stack. The monitoring stack consists of Prometheus, Prometheus exporters, Prometheus Alertmanager, and Grafana. Users can either define these services with Cephadm in a YAML configuration file or use the command-line interface to deploy them. When multiple services of the same type are deployed, a highly available setup is deployed. The node exporter is an exception to this rule. Note Red Hat Ceph Storage 5.0 does not support custom images for deploying monitoring services such as Prometheus, Grafana, Alertmanager, and node-exporter. The following monitoring services can be deployed with Cephadm: Prometheus is the monitoring and alerting toolkit. It collects the data provided by Prometheus exporters and fires preconfigured alerts if predefined thresholds have been reached. The Prometheus manager module provides a Prometheus exporter to pass on Ceph performance counters from the collection point in ceph-mgr . The Prometheus configuration, including scrape targets, such as metrics providing daemons, is set up automatically by Cephadm. Cephadm also deploys a list of default alerts, for example, health error, 10% OSDs down, or pgs inactive. Alertmanager handles alerts sent by the Prometheus server. It deduplicates, groups, and routes the alerts to the correct receiver. By default, the Ceph dashboard is automatically configured as the receiver. Alerts can be silenced using the Alertmanager, but silences can also be managed using the Ceph Dashboard. Grafana is the visualization and alerting software. The alerting functionality of Grafana is not used by this monitoring stack. For alerting, the Alertmanager is used. By default, traffic to Grafana is encrypted with TLS. You can either supply your own TLS certificate or use a self-signed one. If no custom certificate has been configured before Grafana is deployed, then a self-signed certificate is automatically created and configured for Grafana. Custom certificates for Grafana can be configured using the following commands: Syntax Node exporter is an exporter for Prometheus that provides data about the node on which it is installed. It is recommended to install the node exporter on all nodes. This can be done using the monitoring.yml file with the node-exporter service type. 7.1. Deploying the monitoring stack using the Ceph Orchestrator The monitoring stack consists of Prometheus, Prometheus exporters, Prometheus Alertmanager, Grafana, and Ceph Exporter. Ceph Dashboard makes use of these components to store and visualize detailed metrics on cluster usage and performance. You can deploy the monitoring stack using the service specification in YAML file format. All the monitoring services can have the network and port they bind to configured in the yml file. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. Procedure Enable the prometheus module in the Ceph Manager daemon. This exposes the internal Ceph metrics so that Prometheus can read them: Example Important Ensure this command is run before Prometheus is deployed. If the command was not run before the deployment, you must redeploy Prometheus to update the configuration: Navigate to the following directory: Syntax Example Note If the directory monitoring does not exist, create it.
Create the monitoring.yml file: Example Edit the specification file with a content similar to the following example: Example Note Ensure the monitoring stack components alertmanager , prometheus , and grafana are deployed on the same host. The node-exporter and ceph-exporter components should be deployed on all the hosts. Apply monitoring services: Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Important Prometheus, Grafana, and the Ceph dashboard are all automatically configured to talk to each other, resulting in a fully functional Grafana integration in the Ceph dashboard. 7.2. Removing the monitoring stack using the Ceph Orchestrator You can remove the monitoring stack using the ceph orch rm command. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Log into the Cephadm shell: Example Use the ceph orch rm command to remove the monitoring stack: Syntax Example Check the status of the process: Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See Deploying the monitoring stack using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
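After applying monitoring.yml as described in section 7.1, a quick sanity check can confirm that the manager module is enabled and the services were scheduled; this is only a sketch, and the commands read cluster state without changing it.
ceph mgr module ls | grep prometheus     # confirm the prometheus module is enabled
ceph mgr services                        # URLs exposed by ceph-mgr, including the dashboard
ceph orch ls                             # list all orchestrated services
ceph orch ps --service_name=prometheus   # daemons backing the prometheus service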
[ "ceph config-key set mgr/cephadm/grafana_key -i PRESENT_WORKING_DIRECTORY /key.pem ceph config-key set mgr/cephadm/grafana_crt -i PRESENT_WORKING_DIRECTORY /certificate.pem", "ceph mgr module enable prometheus", "ceph orch redeploy prometheus", "cd /var/lib/ceph/ DAEMON_PATH /", "cd /var/lib/ceph/monitoring/", "touch monitoring.yml", "service_type: prometheus service_name: prometheus placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: node-exporter --- service_type: alertmanager service_name: alertmanager placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: grafana service_name: grafana placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: ceph-exporter", "ceph orch apply -i monitoring.yml", "ceph orch ls", "ceph orch ps --service_name= SERVICE_NAME", "ceph orch ps --service_name=prometheus", "cephadm shell", "ceph orch rm SERVICE_NAME --force", "ceph orch rm grafana ceph orch rm prometheus ceph orch rm node-exporter ceph orch rm alertmanager ceph orch rm ceph-exporter ceph mgr module disable prometheus", "ceph orch status", "ceph orch ls", "ceph orch ps", "ceph orch ps" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/operations_guide/management-of-monitoring-stack-using-the-ceph-orchestrator
Chapter 58. Security
Chapter 58. Security OpenSCAP rpmverifypackage does not work correctly The chdir and chroot system calls are called twice by the rpmverifypackage probe. Consequently, an error occurs when the probe is utilized during an OpenSCAP scan with custom Open Vulnerability and Assessment Language (OVAL) content. To work around this problem, do not use the rpmverifypackage_test OVAL test in your content or use only the content from the scap-security-guide package where rpmverifypackage_test is not used. (BZ# 1603347 ) dconf databases are not checked by OVAL OVAL (Open Vulnerability and Assessment Language) checks used in the SCAP Security Guide project are not able to read a dconf binary database, only files used to generate the database. The database is not regenerated automatically; the administrator needs to enter the dconf update command. As a consequence, changes to the database that are not made using files in the /etc/dconf/db/ directory cannot be detected by scanning. This may cause false negative results. To work around this problem, run dconf update periodically, for example, using the /etc/crontab configuration file. (BZ# 1631378 ) SCAP Workbench fails to generate results-based remediations from tailored profiles The following error occurs when trying to generate results-based remediation roles from a customized profile using the SCAP Workbench tool: To work around this problem, use the oscap command with the --tailoring-file option. (BZ# 1533108 ) OpenSCAP scanner results contain a lot of SELinux context error messages The OpenSCAP scanner logs the inability to get the SELinux context at the ERROR level even in situations where it is not a true error. As a result, OpenSCAP scanner results contain a lot of SELinux context error messages. Both the oscap command-line utility and the SCAP Workbench graphical utility outputs can be hard to read for that reason. (BZ# 1640522 ) oscap scans use an excessive amount of memory Result data of Open Vulnerability Assessment Language (OVAL) probes are kept in memory for the whole duration of a scan, and the generation of reports is also a memory-intensive process. Consequently, when very large file systems are scanned, the oscap process can take all available memory and be killed by the operating system. To work around this problem, use tailoring to exclude rules that scan complete file systems and run them separately. Furthermore, do not use the --oval-results option. As a result, if you lower the amount of processed data, scanning of the system should no longer crash because of the excessive use of memory. (BZ#1548949)
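Two of the workarounds above, sketched as commands. The tailoring file name, profile ID, and cron schedule are assumptions; the data stream path is the default scap-security-guide location on RHEL 7.
# Regenerate the dconf database every hour so OVAL checks see current files.
echo '0 * * * * root /usr/bin/dconf update' >> /etc/crontab
# Evaluate a customized profile directly with oscap instead of SCAP Workbench.
oscap xccdf eval --tailoring-file my-tailoring.xml \
  --profile xccdf_org.ssgproject.content_profile_my_custom \
  --results results.xml /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml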
[ "Error generating remediation role '.../remediation.sh': Exit code of 'oscap' was 1: [output truncated]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/known_issues_security
Security and compliance
Security and compliance OpenShift Container Platform 4.15 Learning about and managing security for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/index
Using jlink to customize Java runtime environment
Using jlink to customize Java runtime environment Red Hat build of OpenJDK 11 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_jlink_to_customize_java_runtime_environment/index
Chapter 354. Twitter Search Component
Chapter 354. Twitter Search Component Available as of Camel version 2.10 The Twitter Search component consumes search results. 354.1. Component Options The Twitter Search component supports 9 options, which are listed below. Name Description Default Type accessToken (security) The access token String accessTokenSecret (security) The access token secret String consumerKey (security) The consumer key String consumerSecret (security) The consumer secret String httpProxyHost (proxy) The http proxy host which can be used for the camel-twitter. String httpProxyUser (proxy) The http proxy user which can be used for the camel-twitter. String httpProxyPassword (proxy) The http proxy password which can be used for the camel-twitter. String httpProxyPort (proxy) The http proxy port which can be used for the camel-twitter. int resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean 354.2. Endpoint Options The Twitter Search endpoint is configured using URI syntax: with the following path and query parameters: 354.2.1. Path Parameters (1 parameter): Name Description Default Type keywords Required The search keywords. Multiple values can be separated with a comma. String 354.2.2. Query Parameters (42 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean type (consumer) Endpoint type to use. Only streaming supports event type. polling EndpointType distanceMetric (consumer) Used by the non-stream geography search, to search by radius using the configured metric. The unit can either be mi for miles, or km for kilometers. You need to configure all the following options: longitude, latitude, radius, and distanceMetric. km String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern extendedMode (consumer) Used for enabling full text from twitter (for example, to receive tweets that contain more than 140 characters). true boolean latitude (consumer) Used by the non-stream geography search to search by latitude. You need to configure all the following options: longitude, latitude, radius, and distanceMetric. Double locations (consumer) Bounding boxes, created by pairs of lat/lons. Can be used for streaming/filter. A pair is defined as lat,lon. Multiple pairs can be separated by a semicolon. String longitude (consumer) Used by the non-stream geography search to search by longitude. You need to configure all the following options: longitude, latitude, radius, and distanceMetric.
Double pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel. PollingConsumerPollStrategy radius (consumer) Used by the non-stream geography search to search by radius. You need to configure all the following options: longitude, latitude, radius, and distanceMetric. Double twitterStream (consumer) To use a custom instance of TwitterStream. TwitterStream synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean count (filter) Limiting number of results per page. 5 Integer filterOld (filter) Filter out old tweets that have previously been polled. This state is stored in memory only, and is based on the last tweet id. true boolean lang (filter) The lang string (ISO 639-1) which will be used for searching String numberOfPages (filter) The number of pages of results which you want camel-twitter to consume. 1 Integer sinceId (filter) The last tweet id which will be used for pulling the tweets. It is useful when the Camel route is restarted after running for a long time. 1 long userIds (filter) To filter by user ids for streaming/filter. Multiple values can be separated by a comma. String backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultiplier should kick in. int backoffMultiplier (scheduler) To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the next poll. 30000 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either the camel-spring or camel-quartz2 component. none ScheduledPollConsumerScheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2 or Spring based schedulers. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean sortById (sort) Sorts by id, so the oldest are first, and newest last.
true boolean httpProxyHost (proxy) The http proxy host which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. String httpProxyPassword (proxy) The http proxy password which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. String httpProxyPort (proxy) The http proxy port which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. Integer httpProxyUser (proxy) The http proxy user which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. String accessToken (security) The access token. Can also be configured on the TwitterComponent level instead. String accessTokenSecret (security) The access secret. Can also be configured on the TwitterComponent level instead. String consumerKey (security) The consumer key. Can also be configured on the TwitterComponent level instead. String consumerSecret (security) The consumer secret. Can also be configured on the TwitterComponent level instead. String 354.3. Spring Boot Auto-Configuration The component supports 10 options, which are listed below. Name Description Default Type camel.component.twitter-search.access-token The access token String camel.component.twitter-search.access-token-secret The access token secret String camel.component.twitter-search.consumer-key The consumer key String camel.component.twitter-search.consumer-secret The consumer secret String camel.component.twitter-search.enabled Whether to enable auto configuration of the twitter-search component. This is enabled by default. Boolean camel.component.twitter-search.http-proxy-host The http proxy host which can be used for the camel-twitter. String camel.component.twitter-search.http-proxy-password The http proxy password which can be used for the camel-twitter. String camel.component.twitter-search.http-proxy-port The http proxy port which can be used for the camel-twitter. Integer camel.component.twitter-search.http-proxy-user The http proxy user which can be used for the camel-twitter. String camel.component.twitter-search.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean
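For illustration only, a polling search endpoint that combines several of the options above might look like the following; the keyword and the four credential values are placeholders you would replace with your own Twitter application settings:

twitter-search:camel?count=10&delay=60000&filterOld=true&consumerKey=XXX&consumerSecret=XXX&accessToken=XXX&accessTokenSecret=XXX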
[ "twitter-search:keywords" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/twitter-search-component
Functions
Functions Red Hat OpenShift Serverless 1.35 Setting up and using OpenShift Serverless Functions Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/functions/index
Deploying and managing OpenShift Data Foundation using Google Cloud
Deploying and managing OpenShift Data Foundation using Google Cloud Red Hat OpenShift Data Foundation 4.9 Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Google Cloud. Important Deploying and managing OpenShift Data Foundation on Google Cloud is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Provide as much detail as possible so that your request can be addressed. Prerequisites You have a Red Hat account. You are logged in to your Red Hat account. Procedure To provide your feedback, click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide more details about the issue or enhancement in the Description text box. If your Red Hat user name does not automatically appear in the Reporter text box, enter it. Scroll to the bottom of the page and then click the Create button. A documentation issue is created and routed to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/integrating_the_red_hat_hybrid_cloud_console_with_third-party_applications/proc-providing-feedback-on-redhat-documentation
Chapter 4. Fencing: Configuring STONITH
Chapter 4. Fencing: Configuring STONITH STONITH is an acronym for Shoot-The-Other-Node-In-The-Head and it protects your data from being corrupted by rogue nodes or concurrent access. Just because a node is unresponsive, this does not mean it is not accessing your data. The only way to be 100% sure that your data is safe is to fence the node using STONITH, so we can be certain that the node is truly offline before allowing the data to be accessed from another node. STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere. 4.1. Available STONITH (Fencing) Agents Use the following command to view a list of all available STONITH agents. If you specify a filter, this command displays only the STONITH agents that match the filter.
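As an illustration, a filtered invocation might look like the following; fence_ipmilan is used here only as a placeholder for whichever agent family you are interested in:

pcs stonith list fence_ipmilan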
[ "pcs stonith list [ filter ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/ch-fencing-HAAR
Chapter 7. ConsoleQuickStart [console.openshift.io/v1]
Chapter 7. ConsoleQuickStart [console.openshift.io/v1] Description ConsoleQuickStart is an extension for guiding users through various workflows in the OpenShift web console. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleQuickStartSpec is the desired quick start configuration. 7.1.1. .spec Description ConsoleQuickStartSpec is the desired quick start configuration. Type object Required description displayName durationMinutes introduction tasks Property Type Description accessReviewResources array accessReviewResources contains a list of resources that the user's access will be reviewed against in order for the user to complete the Quick Start. The Quick Start will be hidden if any of the access reviews fail. accessReviewResources[] object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface conclusion string conclusion sums up the Quick Start and suggests the possible next steps. (includes markdown) description string description is the description of the Quick Start. (includes markdown) displayName string displayName is the display name of the Quick Start. durationMinutes integer durationMinutes describes approximately how many minutes it will take to complete the Quick Start. icon string icon is a base64 encoded image that will be displayed beside the Quick Start display name. The icon should be a vector image for easy scaling. The size of the icon should be 40x40. introduction string introduction describes the purpose of the Quick Start. (includes markdown) nextQuickStart array (string) nextQuickStart is a list of the following Quick Starts, suggested for the user to try. prerequisites array (string) prerequisites contains all prerequisites that need to be met before taking a Quick Start. (includes markdown) tags array (string) tags is a list of strings that describe the Quick Start. tasks array tasks is the list of steps the user has to perform to complete the Quick Start. tasks[] object ConsoleQuickStartTask is a single step in a Quick Start. 7.1.2. .spec.accessReviewResources Description accessReviewResources contains a list of resources that the user's access will be reviewed against in order for the user to complete the Quick Start. The Quick Start will be hidden if any of the access reviews fail. Type array 7.1.3. .spec.accessReviewResources[] Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all.
name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 7.1.4. .spec.tasks Description tasks is the list of steps the user has to perform to complete the Quick Start. Type array 7.1.5. .spec.tasks[] Description ConsoleQuickStartTask is a single step in a Quick Start. Type object Required description title Property Type Description description string description describes the steps needed to complete the task. (includes markdown) review object review contains instructions to validate the task is complete. The user will select 'Yes' or 'No' using a radio button, which indicates whether the step was completed successfully. summary object summary contains information about the passed step. title string title describes the task and is displayed as a step heading. 7.1.6. .spec.tasks[].review Description review contains instructions to validate the task is complete. The user will select 'Yes' or 'No' using a radio button, which indicates whether the step was completed successfully. Type object Required failedTaskHelp instructions Property Type Description failedTaskHelp string failedTaskHelp contains suggestions for a failed task review and is shown at the end of the task. (includes markdown) instructions string instructions contains steps that the user needs to take in order to validate their work after going through a task. (includes markdown) 7.1.7. .spec.tasks[].summary Description summary contains information about the passed step. Type object Required failed success Property Type Description failed string failed briefly describes the unsuccessfully passed task. (includes markdown) success string success describes the successfully passed task. 7.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolequickstarts DELETE : delete collection of ConsoleQuickStart GET : list objects of kind ConsoleQuickStart POST : create a ConsoleQuickStart /apis/console.openshift.io/v1/consolequickstarts/{name} DELETE : delete a ConsoleQuickStart GET : read the specified ConsoleQuickStart PATCH : partially update the specified ConsoleQuickStart PUT : replace the specified ConsoleQuickStart 7.2.1. /apis/console.openshift.io/v1/consolequickstarts Table 7.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConsoleQuickStart Table 7.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion.
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleQuickStart Table 7.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests.
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStartList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleQuickStart Table 7.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.7. Body parameters Parameter Type Description body ConsoleQuickStart schema Table 7.8. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 201 - Created ConsoleQuickStart schema 202 - Accepted ConsoleQuickStart schema 401 - Unauthorized Empty 7.2.2. /apis/console.openshift.io/v1/consolequickstarts/{name} Table 7.9. Global path parameters Parameter Type Description name string name of the ConsoleQuickStart Table 7.10. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsoleQuickStart Table 7.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.12. Body parameters Parameter Type Description body DeleteOptions schema Table 7.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleQuickStart Table 7.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.15. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleQuickStart Table 7.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.17. Body parameters Parameter Type Description body Patch schema Table 7.18. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleQuickStart Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body ConsoleQuickStart schema Table 7.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 201 - Created ConsoleQuickStart schema 401 - Unauthorized Empty
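As an illustration only, the following sketch creates a minimal ConsoleQuickStart object containing just the required spec fields listed above; the object name and all of the text values are placeholders:

oc apply -f - <<EOF
apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: example-quick-start
spec:
  displayName: Example quick start
  durationMinutes: 5
  description: A short example quick start.
  introduction: This quick start demonstrates the minimal required fields.
  tasks:
  - title: First task
    description: Describe the steps the user performs here.
EOF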
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/console_apis/consolequickstart-console-openshift-io-v1
12.3.3. Stopping a Service
12.3.3. Stopping a Service To stop a running service, type the following at a shell prompt as root : service service_name stop For example, to stop the httpd service, type:
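If you want to confirm the result, the status sub-command can be used in the same way, for example:

service httpd status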
[ "~]# service httpd stop Stopping httpd: [ OK ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s3-services-running-stopping
Chapter 6. Synchronizing Active Directory and Identity Management Users
Chapter 6. Synchronizing Active Directory and Identity Management Users This chapter describes synchronization between Active Directory and Red Hat Enterprise Linux Identity Management. Synchronization is one of the two methods for indirect integration of the two environments. For details on the cross-forest trust, which is the other, recommended method, see Chapter 5, Creating Cross-forest Trusts with Active Directory and Identity Management . If you are unsure which method to choose for your environment, read Section 1.3, "Indirect Integration" . Identity Management uses synchronization to combine the user data stored in an Active Directory domain and the user data stored in the IdM domain. Critical user attributes, including passwords, are copied and synchronized between the services. Entry synchronization is performed through a process similar to replication, which uses hooks to connect to and retrieve directory data from the Windows server. Password synchronization is performed through a Windows service which is installed on the Windows server and then communicates to the Identity Management server. 6.1. Supported Windows Platforms Synchronization is supported with Active Directory forests that use the following forest and domain functional levels: Forest functional level range: Windows Server 2008 - Windows Server 2012 R2 Domain functional level range: Windows Server 2008 - Windows Server 2012 R2 The following operating systems are explicitly supported and tested for synchronization using the mentioned functional levels: Windows Server 2012 R2 Windows Server 2016 PassSync 1.1.5 or later is compatible with all supported Windows Server versions.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/active-directory
Installing on Azure Stack Hub
Installing on Azure Stack Hub OpenShift Container Platform 4.15 Installing OpenShift Container Platform on Azure Stack Hub Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_azure_stack_hub/index
Chapter 337. Adding Security Definitions in API doc
Chapter 337. Adding Security Definitions in API doc Available as of Camel 3.1.0 The Rest DSL now supports declaring OpenApi securityDefinitions in the generated API document. For example, as shown below: rest("/user").tag("dude").description("User rest service") // setup security definitions .securityDefinitions() .oauth2("petstore_auth").authorizationUrl("http://petstore.swagger.io/oauth/dialog").end() .apiKey("api_key").withHeader("myHeader").end() .end() .consumes("application/json").produces("application/json") Here we have set up two security definitions: OAuth2 - with implicit authorization with the provided URL Api Key - using an api key that comes from the HTTP header named myHeader Then you need to specify on the rest operations which security to use by referring to their key (petstore_auth or api_key). .get("/{id}/{date}").description("Find user by id and date").outType(User.class) .security("api_key") ... .put().description("Updates or create a user").type(User.class) .security("petstore_auth", "write:pets,read:pets") Here the get operation is using the Api Key security and the put operation is using OAuth security with permitted scopes of read and write pets.
[ "rest(\"/user\").tag(\"dude\").description(\"User rest service\") // setup security definitions .securityDefinitions() .oauth2(\"petstore_auth\").authorizationUrl(\"http://petstore.swagger.io/oauth/dialog\").end() .apiKey(\"api_key\").withHeader(\"myHeader\").end() .end() .consumes(\"application/json\").produces(\"application/json\")", ".get(\"/{id}/{date}\").description(\"Find user by id and date\").outType(User.class) .security(\"api_key\") .put().description(\"Updates or create a user\").type(User.class) .security(\"petstore_auth\", \"write:pets,read:pets\")" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/adding_security_definitions_in_api_doc
probe::vm.brk
probe::vm.brk Name probe::vm.brk - Fires when a brk is requested (i.e. the heap will be resized) Synopsis vm.brk Values name name of the probe point address the requested address length the length of the memory segment Context The process calling brk.
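A minimal, illustrative SystemTap one-liner that prints the documented values each time the probe fires might look like the following; the output format is arbitrary:

stap -e 'probe vm.brk { printf("%s brk: address=0x%x, length=%d\n", name, address, length) }'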
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-vm-brk
4.185. mysql
4.185. mysql 4.185.1. RHSA-2012:0105 - Important: mysql security update Updated mysql packages that fix several security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. MySQL is a multi-user, multi-threaded SQL database server. It consists of the MySQL server daemon (mysqld) and many client programs and libraries. Bug Fixes CVE-2011-2262 , CVE-2012-0075 , CVE-2012-0087 , CVE-2012-0101 , CVE-2012-0102 , CVE-2012-0112 , CVE-2012-0113 , CVE-2012-0114 , CVE-2012-0115 , CVE-2012-0116 , CVE-2012-0118 , CVE-2012-0119 , CVE-2012-0120 , CVE-2012-0484 , CVE-2012-0485 , CVE-2012-0490 , CVE-2012-0492 This update fixes several vulnerabilities in the MySQL database server. Information about these flaws can be found on the Oracle Critical Patch Update Advisory page. These updated packages upgrade MySQL to version 5.1.61. Refer to the MySQL release notes for a full list of changes: http://dev.mysql.com/doc/refman/5.1/en/news-5-1-x.html All MySQL users should upgrade to these updated packages, which correct these issues. After installing this update, the MySQL server daemon (mysqld) will be restarted automatically.
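To apply the update, you might run yum as root; the exact set of MySQL subpackages installed on a given system can vary, so the package list below is only an example:

yum update mysql mysql-server mysql-libs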
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/mysql
Chapter 6. Understanding OpenShift Container Platform development
Chapter 6. Understanding OpenShift Container Platform development To fully leverage the capability of containers when developing and running enterprise-quality applications, ensure your environment is supported by tools that allow containers to be: Created as discrete microservices that can be connected to other containerized, and non-containerized, services. For example, you might want to join your application with a database or attach a monitoring application to it. Resilient, so if a server crashes or needs to go down for maintenance or to be decommissioned, containers can start on another machine. Automated to pick up code changes automatically and then start and deploy new versions of themselves. Scaled up, or replicated, to have more instances serving clients as demand increases and then spun down to fewer instances as demand declines. Run in different ways, depending on the type of application. For example, one application might run once a month to produce a report and then exit. Another application might need to run constantly and be highly available to clients. Managed so you can watch the state of your application and react when something goes wrong. Containers' widespread acceptance, and the resulting requirements for tools and methods to make them enterprise-ready, resulted in many options for them. The rest of this section explains options for assets you can create when you build and deploy containerized Kubernetes applications in OpenShift Container Platform. It also describes which approaches you might use for different kinds of applications and development requirements. 6.1. About developing containerized applications You can approach application development with containers in many ways, and different approaches might be more appropriate for different situations. To illustrate some of this variety, the series of approaches that is presented starts with developing a single container and ultimately deploys that container as a mission-critical application for a large enterprise. These approaches show different tools, formats, and methods that you can employ with containerized application development. This topic describes: Building a simple container and storing it in a registry Creating a Kubernetes manifest and saving it to a Git repository Making an Operator to share your application with others 6.2. Building a simple container You have an idea for an application and you want to containerize it. First, you require a tool for building a container, like buildah or docker, and a file that describes what goes in your container, which is typically a Dockerfile . Next, you require a location to push the resulting container image so you can pull it to run anywhere you want it to run. This location is a container registry. Some examples of each of these components are installed by default on most Linux operating systems, except for the Dockerfile, which you provide yourself. The following diagram displays the process of building and pushing an image: Figure 6.1. Create a simple containerized application and push it to a registry If you use a computer that runs Red Hat Enterprise Linux (RHEL) as the operating system, the process of creating a containerized application requires the following steps: Install container build tools: RHEL contains a set of tools that includes podman, buildah, and skopeo that you use to build and manage containers. Create a Dockerfile to combine base image and software: Information about building your container goes into a file that is named Dockerfile .
In that file, you identify the base image you build from, the software packages you install, and the software you copy into the container. You also identify parameter values like network ports that you expose outside the container and volumes that you mount inside the container. Put your Dockerfile and the software you want to containerize in a directory on your RHEL system. Run buildah or docker build: Run the buildah build-using-dockerfile or the docker build command to pull your chosen base image to the local system and create a container image that is stored locally. You can also build container images without a Dockerfile by using buildah. Tag and push to a registry: Add a tag to your new container image that identifies the location of the registry in which you want to store and share your container. Then push that image to the registry by running the podman push or docker push command. Pull and run the image: From any system that has a container client tool, such as podman or docker, run a command that identifies your new image. For example, run the podman run <image_name> or docker run <image_name> command. Here <image_name> is the name of your new container image, which resembles quay.io/myrepo/myapp:latest . The registry might require credentials to push and pull images. For more details on the process of building container images, pushing them to registries, and running them, see Custom image builds with Buildah . 6.2.1. Container build tool options Building and managing containers with buildah, podman, and skopeo results in industry standard container images that include features specifically tuned for deploying containers in OpenShift Container Platform or other Kubernetes environments. These tools are daemonless and can run without root privileges, requiring less overhead to run them. Important Support for Docker Container Engine as a container runtime is deprecated in Kubernetes 1.20 and will be removed in a future release. However, Docker-produced images will continue to work in your cluster with all runtimes, including CRI-O. For more information, see the Kubernetes blog announcement . When you ultimately run your containers in OpenShift Container Platform, you use the CRI-O container engine. CRI-O runs on every worker and control plane machine in an OpenShift Container Platform cluster, but CRI-O is not yet supported as a standalone runtime outside of OpenShift Container Platform. 6.2.2. Base image options The base image you choose to build your application on contains a set of software that resembles a Linux system to your application. When you build your own image, your software is placed into that file system and sees that file system as though it were looking at its operating system. Choosing this base image has major impact on how secure, efficient and upgradeable your container is in the future. Red Hat provides a new set of base images referred to as Red Hat Universal Base Images (UBI). These images are based on Red Hat Enterprise Linux and are similar to base images that Red Hat has offered in the past, with one major difference: they are freely redistributable without a Red Hat subscription. As a result, you can build your application on UBI images without having to worry about how they are shared or the need to create different images for different environments. These UBI images have standard, init, and minimal versions. 
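A minimal sketch of the build, tag, push, and run flow described above, using a UBI base image; the image name, the quay.io repository, and the application binary are placeholders only:

# Example Dockerfile contents (placeholder application):
#   FROM registry.access.redhat.com/ubi8/ubi-minimal
#   COPY myapp /usr/local/bin/myapp
#   CMD ["/usr/local/bin/myapp"]

buildah build-using-dockerfile -t myapp:latest .
podman tag localhost/myapp:latest quay.io/myrepo/myapp:latest
podman login quay.io
podman push quay.io/myrepo/myapp:latest
podman run quay.io/myrepo/myapp:latest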
You can also use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images are available for you to use directly from the OpenShift Container Platform web UI by selecting Catalog Developer Catalog , as shown in the following figure: Figure 6.2. Choose S2I base images for apps that need specific runtimes 6.2.3. Registry options Container registries are where you store container images so you can share them with others and make them available to the platform where they ultimately run. You can select large, public container registries that offer free accounts or a premium version that offer more storage and special features. You can also install your own registry that can be exclusive to your organization or selectively shared with others. To get Red Hat images and certified partner images, you can draw from the Red Hat Registry. The Red Hat Registry is represented by two locations: registry.access.redhat.com , which is unauthenticated and deprecated, and registry.redhat.io , which requires authentication. You can learn about the Red Hat and partner images in the Red Hat Registry from the Container images section of the Red Hat Ecosystem Catalog . Besides listing Red Hat container images, it also shows extensive information about the contents and quality of those images, including health scores that are based on applied security updates. Large, public registries include Docker Hub and Quay.io . The Quay.io registry is owned and managed by Red Hat. Many of the components used in OpenShift Container Platform are stored in Quay.io, including container images and the Operators that are used to deploy OpenShift Container Platform itself. Quay.io also offers the means of storing other types of content, including Helm charts. If you want your own, private container registry, OpenShift Container Platform itself includes a private container registry that is installed with OpenShift Container Platform and runs on its cluster. Red Hat also offers a private version of the Quay.io registry called Red Hat Quay . Red Hat Quay includes geo replication, Git build triggers, Clair image scanning, and many other features. All of the registries mentioned here can require credentials to download images from those registries. Some of those credentials are presented on a cluster-wide basis from OpenShift Container Platform, while other credentials can be assigned to individuals. 6.3. Creating a Kubernetes manifest for OpenShift Container Platform While the container image is the basic building block for a containerized application, more information is required to manage and deploy that application in a Kubernetes environment such as OpenShift Container Platform. The typical steps after you create an image are to: Understand the different resources you work with in Kubernetes manifests Make some decisions about what kind of an application you are running Gather supporting components Create a manifest and store that manifest in a Git repository so you can store it in a source versioning system, audit it, track it, promote and deploy it to the environment, roll it back to earlier versions, if necessary, and share it with others 6.3.1. 
About Kubernetes pods and services While the container image is the basic unit with docker, the basic units that Kubernetes works with are called pods . Pods represent the next step in building out an application. A pod can contain one or more than one container. The key is that the pod is the single unit that you deploy, scale, and manage. Scalability and namespaces are probably the main items to consider when determining what goes in a pod. For ease of deployment, you might want to deploy a container in a pod and include its own logging and monitoring container in the pod. Later, when you run the pod and need to scale up an additional instance, those other containers are scaled up with it. For namespaces, containers in a pod share the same network interfaces, shared storage volumes, and resource limitations, such as memory and CPU, which makes it easier to manage the contents of the pod as a single unit. Containers in a pod can also communicate with each other by using standard inter-process communications, such as System V semaphores or POSIX shared memory. While individual pods represent a scalable unit in Kubernetes, a service provides a means of grouping together a set of pods to create a complete, stable application that can complete tasks such as load balancing. A service is also more permanent than a pod because the service remains available from the same IP address until you delete it. When the service is in use, it is requested by name and the OpenShift Container Platform cluster resolves that name into the IP addresses and ports where you can reach the pods that compose the service. By their nature, containerized applications are separated from the operating systems where they run and, by extension, their users. Part of your Kubernetes manifest describes how to expose the application to internal and external networks by defining network policies that allow fine-grained control over communication with your containerized applications. To connect incoming requests for HTTP, HTTPS, and other services from outside your cluster to services inside your cluster, you can use an Ingress resource. If your container requires on-disk storage instead of database storage, which might be provided through a service, you can add volumes to your manifests to make that storage available to your pods. You can configure the manifests to create persistent volumes (PVs) or dynamically create volumes that are added to your Pod definitions. After you define a group of pods that compose your application, you can define those pods in Deployment and DeploymentConfig objects. 6.3.2. Application types Next, consider how your application type influences how to run it. Kubernetes defines different types of workloads that are appropriate for different kinds of applications. To determine the appropriate workload for your application, consider if the application is: Meant to run to completion and be done. An example is an application that starts up to produce a report and exits when the report is complete. The application might then not run again for a month. Suitable OpenShift Container Platform objects for these types of applications include Job and CronJob objects. Expected to run continuously. For long-running applications, you can write a deployment . Required to be highly available. If your application requires high availability, then you want to size your deployment to have more than one instance. A Deployment or DeploymentConfig object can incorporate a replica set for that type of application.
With replica sets, pods run across multiple nodes to make sure the application is always available, even if a worker goes down. Need to run on every node. Some types of Kubernetes applications are intended to run in the cluster itself on every master or worker node. DNS and monitoring applications are examples of applications that need to run continuously on every node. You can run this type of application as a daemon set . You can also run a daemon set on a subset of nodes, based on node labels. Require life-cycle management. When you want to hand off your application so that others can use it, consider creating an Operator . Operators let you build in intelligence, so that tasks like backups and upgrades can be handled automatically. Coupled with the Operator Lifecycle Manager (OLM), cluster managers can expose Operators to selected namespaces so that users in the cluster can run them. Have identity or numbering requirements. An application might have identity requirements or numbering requirements. For example, you might be required to run exactly three instances of the application and to name the instances 0 , 1 , and 2 . A stateful set is suitable for this application. Stateful sets are most useful for applications that require independent storage, such as databases and zookeeper clusters. 6.3.3. Available supporting components The application you write might need supporting components, like a database or a logging component. To fulfill that need, you might be able to obtain the required component from the following Catalogs that are available in the OpenShift Container Platform web console: OperatorHub, which is available in each OpenShift Container Platform 4.10 cluster. The OperatorHub makes Operators available from Red Hat, certified Red Hat partners, and community members to the cluster operator. The cluster operator can make those Operators available in all or selected namespaces in the cluster, so developers can launch them and configure them with their applications. Templates, which are useful for a one-off type of application, where the lifecycle of a component is not important after it is installed. A template provides an easy way to get started developing a Kubernetes application with minimal overhead. A template can be a list of resource definitions, which could be Deployment , Service , Route , or other objects. If you want to change names or resources, you can set these values as parameters in the template. You can configure the supporting Operators and templates to the specific needs of your development team and then make them available in the namespaces in which your developers work. Many people add shared templates to the openshift namespace because it is accessible from all other namespaces. 6.3.4. Applying the manifest Kubernetes manifests let you create a more complete picture of the components that make up your Kubernetes applications. You write these manifests as YAML files and deploy them by applying them to the cluster, for example, by running the oc apply command. 6.3.5. Next steps At this point, consider ways to automate your container development process. Ideally, you have some sort of CI pipeline that builds the images and pushes them to a registry. In particular, a GitOps pipeline integrates your container development with the Git repositories that you use to store the software that is required to build your applications. The workflow to this point might look like: Day 1: You write some YAML. 
You then run the oc apply command to apply that YAML to the cluster and test that it works. Day 2: You put your YAML container configuration file into your own Git repository. From there, people who want to install that app, or help you improve it, can pull down the YAML and apply it to their cluster to run the app. Day 3: Consider writing an Operator for your application. 6.4. Develop for Operators Packaging and deploying your application as an Operator might be preferred if you make your application available for others to run. As noted earlier, Operators add a lifecycle component to your application that acknowledges that the job of running an application is not complete as soon as it is installed. When you create an application as an Operator, you can build in your own knowledge of how to run and maintain the application. You can build in features for upgrading the application, backing it up, scaling it, or keeping track of its state. If you configure the application correctly, maintenance tasks, like updating the Operator, can happen automatically and invisibly to the Operator's users. An example of a useful Operator is one that is set up to automatically back up data at particular times. Having an Operator manage an application's backup at set times can save a system administrator from remembering to do it. Any application maintenance that has traditionally been completed manually, like backing up data or rotating certificates, can be completed automatically with an Operator.
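To make the "Day 1" and "Day 2" workflow above concrete, the following is a minimal sketch rather than an excerpt from this guide: it writes a small Deployment and Service manifest of the kind this section describes and applies it with oc apply. The application name, container image reference, port numbers, and labels are placeholder assumptions that you would replace with your own values.

# Write a minimal Deployment plus Service manifest (placeholder names and image).
cat <<'EOF' > hello-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2                    # more than one instance for availability
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/<your_namespace>/hello-app:latest   # placeholder image reference
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app
  ports:
  - port: 80
    targetPort: 8080
EOF
# Apply the manifest to the cluster ("Day 1"), then commit hello-app.yaml to a Git repository ("Day 2").
oc apply -f hello-app.yaml

Committing the same YAML to Git is what makes the later GitOps and Operator steps possible, because the manifest becomes a versioned artifact rather than state that exists only in the cluster.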
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/architecture/understanding-development
Installing on IBM Power Virtual Server
Installing on IBM Power Virtual Server OpenShift Container Platform 4.16 Installing OpenShift Container Platform on IBM Power Virtual Server Red Hat OpenShift Documentation Team
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "ibmcloud plugin install cis", "ibmcloud login", "ibmcloud cis instance-create <instance_name> standard-next 1", "ibmcloud cis instance-set <instance_CRN> 1", "ibmcloud cis domain-add <domain_name> 1", "ibmcloud resource service-instance <workspace name>", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: \"ibmcloud-resource-group\" 10 serviceInstanceGUID: \"powervs-region-service-instance-guid\" vpcRegion : vpc-region publish: External pullSecret: '{\"auths\": ...}' 11 sshKey: ssh-ed25519 AAAA... 
12", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 11 vpcSubnets: 12 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceGUID: \"powervs-region-service-instance-guid\" credentialsMode: Manual publish: External 13 pullSecret: '{\"auths\": ...}' 14 fips: false sshKey: ssh-ed25519 AAAA... 
15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcName: name-of-existing-vpc 11 vpcSubnets: - powervs-region-example-subnet-1 vpcRegion : vpc-region zone: powervs-zone serviceInstanceGUID: \"powervs-region-service-instance-guid\" publish: Internal 12 pullSecret: '{\"auths\": ...}' 13 sshKey: ssh-ed25519 AAAA... 
14", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "vpcName: <existing_vpc> vpcSubnets: <vpcSubnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: powervs: smtLevel: 8 5 replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: powervs: smtLevel: 8 9 ibmcloud: {} replicas: 3 metadata: name: example-restricted-cluster-name 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 11 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 12 networkType: OVNKubernetes 13 serviceNetwork: - 192.168.0.0/24 platform: powervs: userid: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" 14 region: \"powervs-region\" vpcRegion: \"vpc-region\" vpcName: name-of-existing-vpc 15 vpcSubnets: 16 - name-of-existing-vpc-subnet zone: \"powervs-zone\" serviceInstanceID: \"service-instance-id\" publish: Internal credentialsMode: Manual pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 17 sshKey: ssh-ed25519 AAAA... 
18 additionalTrustBundle: | 19 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 20 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "ibmcloud is volumes --resource-group-name <infrastructure_id>", "ibmcloud is volume-delete --force <volume_id>", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>", "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "platform: powervs: userID:", "platform: powervs: powervsResourceGroup:", "platform: powervs: region:", "platform: powervs: zone:", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: smtLevel:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "platform: powervs: vpcRegion:", "platform: powervs: vpcSubnets:", "platform: powervs: vpcName:", "platform: powervs: serviceInstanceGUID:", "platform: powervs: clusterOSImage:", "platform: powervs: defaultMachinePlatform:", "platform: powervs: memoryGiB:", "platform: powervs: procType:", "platform: powervs: processors:", "platform: powervs: sysType:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/installing_on_ibm_power_virtual_server/index
Chapter 6. Security
Chapter 6. Security AMQ Broker 7 provides transport layer security to secure incoming network connections, and authorization to secure access to queues based on their respective addresses. In both of these areas, the security model is very similar to AMQ 6. However, the configuration processes are different. 6.1. How Transport Layer Security is Configured Like AMQ 6, AMQ Broker 7 enables you to secure incoming network connections using SSL/TLS. However, there are some differences in configuration syntax and configuration properties. In AMQ 6, transport layer security was configured by creating an SSL context to define the keystores and truststores, and then adding SSL attributes to each transport connector that you wanted to secure. In AMQ Broker 7, the transport layer is based on Netty, which uses SSL natively. This means that to configure transport layer security, you just add the necessary SSL attributes to each acceptor that you want to secure. You do not need to add a separate SSL context. For example, the following configuration accepts secure connections from an OpenWire client: In AMQ 6 Define the SSL context in the INSTALL_DIR /etc/activemq.xml file: <sslContext> <sslContext keyStore="file:USD{activemq.conf}/broker.ks" keyStorePassword="password"/> </sslContext> In the broker configuration file, create a transport connector to accept secure connections from the OpenWire client: <transportConnector name="ssl" uri="ssl://localhost:61617?transport.needClientAuth=true"/> In AMQ Broker 7 In the BROKER_INSTANCE_DIR /etc/broker.xml configuration file, create or update an acceptor to accept secure connections from the OpenWire client: <acceptor name="netty-ssl-acceptor">tcp://localhost:61617?sslEnabled=true;keyStorePath=USD{data.dir}/../etc/broker.ks;keyStorePassword=password;needClientAuth=true</acceptor> You can configure either one-way or two-way TLS. The following table describes these methods: Method Description One-way TLS Only the broker presents a certificate. This method requires you to have a Java KeyStore for the server-side certificates. For more information, see Securing connections in Configuring AMQ Broker . Two-way TLS (mutual authentication) Both the broker and the client present certificates. This method requires you to have a Java KeyStore for the server-side certificates, and a TrustStore that holds the keys of the clients that the broker trusts. For more information, see Securing connections in Configuring AMQ Broker . Note To reuse your existing keystores and truststores for AMQ Broker 7, copy them to your AMQ Broker 7 broker instance. Related Information For a full list of all transport layer security configuration properties, see Netty TLS Parameters in Configuring AMQ Broker . 6.2. Authorization AMQ Broker 7 provides a role-based security model in which you apply security settings to queues based on their addresses. This security model is similar to AMQ 6; however, the permissions and wildcard syntax are different, and authorization is configured differently. 6.2.1. Authorization Changes AMQ Broker 7 uses a different set of permissions and a slightly different wildcard syntax than AMQ 6. 
The following list describes the different types of permissions that you can apply in AMQ 6 and the corresponding permissions in AMQ Broker 7: the AMQ 6 write permission corresponds to send; the AMQ 6 read permission corresponds to consume and browse; and the AMQ 6 admin permission corresponds to createAddress, deleteAddress, createNonDurableQueue, deleteNonDurableQueue, createDurableQueue, deleteDurableQueue, and manage. For more information about permissions in AMQ Broker 7, see Configuring user- and role-based authorization in Configuring AMQ Broker . The wildcard syntax for matching addresses is also different in AMQ Broker 7. To separate words in the path, both AMQ 6 and AMQ Broker 7 use a period ( . ). To match a single word, both AMQ 6 and AMQ Broker 7 use an asterisk ( * ). To match any word recursively, AMQ 6 uses > and AMQ Broker 7 uses # . 6.2.2. How Authorization is Configured You use the BROKER_INSTANCE_DIR /etc/broker.xml configuration file to assign security settings to queues. The broker.xml configuration file contains the following default security settings, which provide complete access to all addresses and queues for the default role that you created when you created the broker instance: <configuration ...> <core ...> ... <security-settings> <security-setting match="#"> 1 <permission type="createNonDurableQueue" roles="admin"/> 2 <permission type="deleteNonDurableQueue" roles="admin"/> <permission type="createDurableQueue" roles="admin"/> <permission type="deleteDurableQueue" roles="admin"/> <permission type="createAddress" roles="admin"/> <permission type="deleteAddress" roles="admin"/> <permission type="consume" roles="admin"/> <permission type="browse" roles="admin"/> <permission type="send" roles="admin"/> <permission type="manage" roles="admin"/> </security-setting> </security-settings> ... </core> </configuration> 1 The address or address prefix to which a set of security permissions are applied. The permissions are applied to the set of queues that match the address. In this example, the # wildcard matches all addresses. 2 A permission granted to a role. In this example, all users belonging to the admin role are granted permission to create non-durable queues. You can configure authorization for a queue or set of queues by specifying an address that matches the queues, and then specifying the roles that should be granted each permission type. Related Information Configuring user- and role-based authorization in Configuring AMQ Broker
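As a supplement to the transport layer security configuration in Section 6.1, the following is a hedged sketch of creating the broker keystore referenced in the acceptor example (broker.ks) and a client truststore with the standard JDK keytool utility. The alias, distinguished name, truststore file name, validity period, and passwords are illustrative assumptions, not values mandated by AMQ Broker; for two-way TLS you would repeat the export and import steps in the opposite direction so that the broker also trusts the client certificates.

# Generate the broker key pair in a Java KeyStore (illustrative alias, DN, and passwords).
keytool -genkeypair -alias broker -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=localhost" -keystore broker.ks -storepass password -keypass password
# Export the broker certificate so that clients can trust it.
keytool -exportcert -alias broker -keystore broker.ks -storepass password -file broker_cert.pem
# Import the broker certificate into a client truststore (illustrative file name).
keytool -importcert -alias broker -file broker_cert.pem -keystore client.ts -storepass password -noprompt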
[ "<sslContext> <sslContext keyStore=\"file:USD{activemq.conf}/broker.ks\" keyStorePassword=\"password\"/> </sslContext>", "<transportConnector name=\"ssl\" uri=\"ssl://localhost:61617?transport.needClientAuth=true\"/>", "<acceptor name=\"netty-ssl-acceptor\">tcp://localhost:61617?sslEnabled=true;keyStorePath=USD{data.dir}/../etc/broker.ks;keyStorePassword=password;needClientAuth=true</acceptor>", "<configuration ...> <core ...> <security-settings> <security-setting match=\"#\"> 1 <permission type=\"createNonDurableQueue\" roles=\"admin\"/> 2 <permission type=\"deleteNonDurableQueue\" roles=\"admin\"/> <permission type=\"createDurableQueue\" roles=\"admin\"/> <permission type=\"deleteDurableQueue\" roles=\"admin\"/> <permission type=\"createAddress\" roles=\"admin\"/> <permission type=\"deleteAddress\" roles=\"admin\"/> <permission type=\"consume\" roles=\"admin\"/> <permission type=\"browse\" roles=\"admin\"/> <permission type=\"send\" roles=\"admin\"/> <permission type=\"manage\" roles=\"admin\"/> </security-setting> </security-settings> </core> </configuration>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/migrating_to_red_hat_amq_7/security
Chapter 3. Launching a JBoss EAP instance
Chapter 3. Launching a JBoss EAP instance The following procedures show launching a public JBoss EAP instance from the Amazon Web Services (AWS) marketplace and launching a JBoss EAP instance using the Amazon EC2 console. 3.1. Launching a JBoss EAP instance from the AWS marketplace The public JBoss EAP Amazon Machine Image (AMI), offered with the pay-as-you-go (PAYG) model, is available at the Amazon Web Services (AWS) marketplace. Prerequisites You have an AWS account. The Amazon Web Services CLI is installed and configured with your account credentials. Procedure Go to AWS marketplace at the URL: https://aws.amazon.com/marketplace . Search for "JBoss EAP" in the search bar. Filter the results by Publisher , selecting Red Hat Limited and Red Hat . Click the image you want to launch. Note If you are based in Europe, the Middle East, or Africa, select the image from the publisher "Red Hat Limited" , otherwise select the image from the publisher "Red Hat" . You are redirected to the software subscription page. Select the subscription settings and click Continue to Subscribe . Accept the terms by clicking Accept Terms , and then click Continue to Configuration . You are redirected to the configuration page. Select the configuration options and click Continue to Launch . You are directed to the launch software page. Review the launch configuration details and launch the instance by clicking Launch . 3.2. Launching JBoss EAP instance from private AMI using AWS EC2 Console You can launch a JBoss EAP instance on Amazon EC2 using the EC2 console. You can also launch an instance using the AWS Command Line Interface. See AWS CLI for more information. Prerequisites You have a Red Hat subscription. You have an AWS account. The Amazon Web Services CLI is installed and configured with your account credentials. Procedure Open the Amazon EC2 console . From the Amazon EC2 console, click AMIs . Search for the jbeap AMI in Private images , located in the Amazon Machine Images (AMIs) panel, and select the AMI. For example, RHEL-9-JBEAP-8.0.0_HVM_GA-20240909-x86_64-0-Access2-GP2 . Choose an instance type. See Supported Amazon EC2 Instance Types for more information on supported Amazon EC2 instance types. In the Configure Instance Details section, configure the instance settings. In the Advanced Details section, User data box, you can paste the sample script to run JBoss EAP when the instance is launched. Note If required, you can specify the storage, tag the instance, and configure the security group details. Click Review and Launch . This takes you directly to the Review Instance Launch page. Click Launch to choose a key pair and launch the instance. Note If you have not selected a key pair, you need to specify a key pair before you launch an instance.
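The console procedure above can also be performed non-interactively. The following is a hedged sketch of launching the private JBoss EAP AMI with the AWS CLI run-instances command; the AMI ID, instance type, key pair, security group, subnet, tag, and user-data file shown here are placeholder assumptions that you must replace with your own values, and the exact parameters you need depend on your VPC setup.

# Launch one EC2 instance from the jbeap AMI (all IDs below are placeholders).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.xlarge \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --user-data file://eap-startup.sh \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=jboss-eap-instance}]'
# Check the instance state once it is launched.
aws ec2 describe-instances --filters Name=tag:Name,Values=jboss-eap-instance \
  --query 'Reservations[].Instances[].State.Name'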
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/deploying_jboss_eap_on_amazon_web_services/assembly-launching-eap-instance-amazon-ec2_default
Chapter 4. Deploying AMQ Streams from the OperatorHub
Chapter 4. Deploying AMQ Streams from the OperatorHub Use the Red Hat Integration - AMQ Streams Operator to deploy AMQ Streams from the OperatorHub. The procedures in this section show how to: Deploy the AMQ Streams Operator from the OperatorHub Deploy Kafka components using the AMQ Streams Operator 4.1. Using the Red Hat Integration Operator to install the AMQ Streams Operator The Red Hat Integration Operator allows you to choose and install the Operators that manage your Red Hat Integration components. If you have more than one Red Hat Integration subscription, you can use the Red Hat Integration Operator to install and update the AMQ Streams Operator, as well as the Operators for all subscribed Red Hat Integration components. As with the AMQ Streams Operator, you can use the Operator Lifecycle Manager (OLM) to install the Red Hat Integration Operator on an OpenShift Container Platform (OCP) cluster from the OperatorHub in the OCP console. Additional resources For more information on installing and using the Red Hat Integration Operator, see Installing the Red Hat Integration Operator on OpenShift . 4.2. Deploying the AMQ Streams Operator from the OperatorHub You can deploy the Cluster Operator to your OpenShift cluster by installing the AMQ Streams Operator from the OperatorHub. Warning Make sure you use the appropriate update channel. If you are on a supported version of OpenShift, installing AMQ Streams from the default stable channel is safe . However, if you are using a version of OpenShift that is unsupported, installing AMQ Streams from the stable channel is unsafe , especially when automatic updates are enabled, as the cluster will receive automatic updates with new components that are unsupported by the OpenShift release. Prerequisites The Red Hat Operators OperatorSource is enabled in your OpenShift cluster. If you can see Red Hat Operators in the OperatorHub, the correct OperatorSource is enabled. For more information, see the Operators guide. Installation requires a user with sufficient privileges to install Operators from the OperatorHub. Procedure In the OpenShift web console, click Operators > OperatorHub . Search or browse for the AMQ Streams Operator in the Streaming & Messaging category. Click the Red Hat Integration - AMQ Streams Operator tile and then, in the sidebar on the right, click Install . On the Create Operator Subscription screen, choose from the following installation and update options: Update Channel : Choose the update channel for the AMQ Streams Operator. The (default) stable channel contains all the latest updates and releases, including major, minor, and micro releases, which are assumed to be well tested and stable. An amq-streams- X .x channel contains the minor and micro release updates for a major release, where X is the major release version number. An amq-streams- X.Y .x channel contains the micro release updates for a minor release, where X is the major release version number and Y is the minor release version number. Installation Mode : Choose to install the AMQ Streams Operator to all namespaces in the cluster (the default option) or a specific namespace. It is good practice to use namespaces to separate functions. We recommend that you dedicate a specific namespace to the Kafka cluster and other AMQ Streams components. Approval Strategy : By default, the AMQ Streams Operator is automatically upgraded to the latest AMQ Streams version by the Operator Lifecycle Manager (OLM). 
Optionally, select Manual if you want to manually approve future upgrades. For more information, see the Operators guide in the OpenShift documentation. Click Subscribe ; the AMQ Streams Operator is installed to your OpenShift cluster. The AMQ Streams Operator deploys the Cluster Operator, CRDs, and role-based access control (RBAC) resources to the selected namespace, or to all namespaces. On the Installed Operators screen, check the progress of the installation. The AMQ Streams Operator is ready to use when its status changes to InstallSucceeded . Next, you can use the AMQ Streams Operator to deploy the Kafka components, starting with a Kafka cluster. Additional resources Section 4.3, "Deploying Kafka components using the AMQ Streams Operator" Section 1.4, "AMQ Streams installation methods" Section 5.1.2.1, "Deploying the Kafka cluster" 4.3. Deploying Kafka components using the AMQ Streams Operator When installed on OpenShift Container Platform, the AMQ Streams Operator makes Kafka components available for installation from the user interface. Kafka components available for installation: Kafka Kafka Connect Kafka Connect Source to Image (S2I) Kafka MirrorMaker Kafka MirrorMaker 2 Kafka Topic Kafka User Kafka Bridge Kafka Connector Kafka Rebalance Prerequisites AMQ Streams Operator is installed on the OpenShift Container Platform (OCP) cluster Procedure Navigate to Installed Operators and click Red Hat Integration - AMQ Streams Operator to display the Operator details page. From Provided APIs , click Create Instance for the Kafka component you wish to install. The default configuration for each component is encapsulated in a CRD spec property. (Optional) Configure the installation specification from the form or YAML views before you perform the installation. Click Create to start the installation of the selected component. Wait until the status changes to Succeeded . Additional resources Section 4.2, "Deploying the AMQ Streams Operator from the OperatorHub"
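As an alternative to clicking Create Instance in the web console, the same Kafka component installation can be performed from the command line by applying a custom resource. The following is a minimal, hedged sketch of an ephemeral Kafka cluster definition: the cluster name, namespace, replica counts, and listener layout are illustrative assumptions, the default configuration created by the console form may differ, and the apiVersion can vary between AMQ Streams versions.

# Apply a minimal Kafka custom resource to a namespace watched by the Cluster Operator
# (placeholder namespace my-kafka-project).
oc apply -n my-kafka-project -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral        # ephemeral storage is suitable for evaluation only
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF
# Watch the deployment until the Kafka resource reports Ready.
oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n my-kafka-project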
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/deploying_and_upgrading_amq_streams_on_openshift/operator-hub-str
Chapter 2. Managing compute machines with the Machine API
Chapter 2. Managing compute machines with the Machine API 2.1. Creating a compute machine set on AWS You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.1.1. Sample YAML for a compute machine set custom resource on AWS The sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) Local Zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the infrastructure ID, role node label, and zone. 3 Specify the role node label to add. 4 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. 
If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-<role>-<zone> 5 Specify the zone name, for example, us-east-1a . 6 Specify the region, for example, us-east-1 . 7 Specify the infrastructure ID and zone. 8 Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:[email protected] . Note Custom tags can also be specified during installation in the install-config.yml file. If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file. 2.1.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml If you need compute machine sets in other availability zones, repeat this process to create more compute machine sets. Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.1.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.1.4. Assigning machines to placement groups for Elastic Fabric Adapter instances by using machine sets You can configure a machine set to deploy machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group. EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. This example uses both to demonstrate a configuration that can improve network performance for machines within the specified placement group. Prerequisites You created a placement group in the AWS console. Note Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case. Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 placementGroupPartition: <placement_group_partition_number> 6 # ... 1 Specify an instance type that supports EFAs . 2 Specify the EFA network interface type. 3 Specify the zone, for example, us-east-1a . 4 Specify the region, for example, us-east-1 . 5 Specify the name of the existing AWS placement group to deploy machines in. 6 Optional: Specify the partition number of the existing AWS placement group to deploy machines in. 
Verification In the AWS console, find a machine that the machine set created and verify the following in the machine properties: The placement group field has the value that you specified for the placementGroupName parameter in the machine set. The partition number field has the value that you specified for the placementGroupPartition parameter in the machine set. The interface type field indicates that it uses an EFA. 2.1.5. Machine set options for the Amazon EC2 Instance Metadata Service You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2. Note Using IMDSv2 is only supported on AWS clusters that were created with OpenShift Container Platform version 4.7 or later. To deploy new compute machines with your preferred IMDS configuration, create a compute machine set YAML file with the appropriate values. You can also edit an existing machine set to create new machines with your preferred IMDS configuration when the machine set is scaled up. Important Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2. 2.1.5.1. Configuring IMDS by using machine sets You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines. Prerequisites To use IMDSv2, your AWS cluster must have been created with OpenShift Container Platform version 4.7 or later. Procedure Add or edit the following lines under the providerSpec field: providerSpec: value: metadataServiceOptions: authentication: Required 1 1 To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 2.1.6. Machine sets that deploy machines as Dedicated Instances You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account. Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware. 2.1.6.1. Creating Dedicated Instances by using machine sets You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS. Procedure Specify a dedicated tenancy under the providerSpec field: providerSpec: placement: tenancy: dedicated 2.1.7. Machine sets that deploy machines as Spot Instances You can save on costs by creating a compute machine set running on AWS that deploys machines as non-guaranteed Spot Instances. Spot Instances utilize unused AWS EC2 capacity and are less expensive than On-Demand Instances. 
You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. AWS EC2 can terminate a Spot Instance at any time. AWS gives a two-minute warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when AWS issues the termination warning. Interruptions can occur when using Spot Instances for the following reasons: The instance price exceeds your maximum price The demand for Spot Instances increases The supply of Spot Instances decreases When AWS terminates an instance, a termination handler running on the Spot Instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot Instance. 2.1.7.1. Creating Spot Instances by using compute machine sets You can launch a Spot Instance on AWS by adding spotMarketOptions to your compute machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: spotMarketOptions: {} You can optionally set the spotMarketOptions.maxPrice field to limit the cost of the Spot Instance. For example you can set maxPrice: '2.50' . If the maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to charge up to the On-Demand Instance price. Note It is strongly recommended to use the default On-Demand price as the maxPrice value and to not set the maximum price for Spot Instances. 2.1.8. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the AWS EC2 cloud provider. For more information about the supported instance types, see the following NVIDIA documentation: NVIDIA GPU Operator Community support matrix NVIDIA AI Enterprise support matrix Procedure View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific AWS region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.30.3 ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.30.3 ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.30.3 ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.30.3 ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.30.3 ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.30.3 View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the AWS region. The installer automatically load balances compute machines across availability zones. USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h View the machines that exist in the openshift-machine-api namespace by running the following command. At this time, there is only one compute machine per machine set, though a compute machine set could be scaled to add a node in a particular region and zone. 
USD oc get machines -n openshift-machine-api | grep worker Example output preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition. USD oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json> Edit the JSON file and make the following changes to the new MachineSet definition: Replace worker with gpu . This will be the name of the new machine set. Change the instance type of the new MachineSet definition to g4dn , which includes an NVIDIA Tesla T4 GPU. To learn more about AWS g4dn instance types, see Accelerated Computing . USD jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json "g4dn.xlarge" The <output_file.json> file is saved as preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json . Update the following fields in preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json : .metadata.name to a name containing gpu . .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . .spec.template.spec.providerSpec.value.instanceType to g4dn.xlarge . To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD oc -n openshift-machine-api get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json - Example output 10c10 < "name": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a", --- > "name": "preserve-dsoc12r4-ktjfc-worker-us-east-2a", 21c21 < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a" --- > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a" 31c31 < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a" --- > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a" 60c60 < "instanceType": "g4dn.xlarge", --- > "instanceType": "m5.xlarge", Create the GPU-enabled compute machine set from the definition by running the following command: USD oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json Example output machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created Verification View the machine set you created by running the following command: USD oc -n openshift-machine-api get machinesets | grep gpu The MachineSet replica count is set to 1, so a new Machine object is created automatically. Example output preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s Note that there is no need to specify a namespace for the node. The node definition is cluster scoped. 2.1.9.
Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator from OperatorHub , select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Operator in the console and select Create Node Feature Discovery . Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them. Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet. 2.2. Creating a compute machine set on Azure You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.2.1.
Sample YAML for a compute machine set custom resource on Azure This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg sshPrivateKey: "" sshPublicKey: "" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: "1" 8 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 Specify the node label to add. 3 Specify the infrastructure ID, node label, and region. 4 Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Using the Azure Marketplace offering". 5 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 
6 Specify the region to place machines on. 7 Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field. 8 Specify the zone within your region to place machines on. Ensure that your region supports the zone that you specify. Important If your region supports availability zones, you must specify the zone. Specifying the zone avoids volume node affinity failure when a pod requires a persistent volume attachment. To do this, you can create a compute machine set for each zone in the same region. 2.2.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.2.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.2.4. Using the Azure Marketplace offering You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. 
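For reference, you can confirm that your Azure CLI session satisfies these prerequisites before you continue. The following check is a minimal sketch rather than part of the official procedure, and it assumes the standard behavior of the az client:
USD az account show
If the command reports that no account is logged in, run az login and authenticate with the account that is entitled to the offer.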
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 Note Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer, specifically the values for publisher , offer , sku , and version . Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer: Sample providerSpec image values for Azure Marketplace machines providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: "" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700 2.2.5. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 2.2.6. 
Machine sets that deploy machines as Spot VMs You can save on costs by creating a compute machine set running on Azure that deploys machines as non-guaranteed Spot VMs. Spot VMs utilize unused Azure capacity and are less expensive than standard VMs. You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. Azure can terminate a Spot VM at any time. Azure gives a 30-second warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when Azure issues the termination warning. Interruptions can occur when using Spot VMs for the following reasons: The instance price exceeds your maximum price The supply of Spot VMs decreases Azure needs capacity back When Azure terminates an instance, a termination handler running on the Spot VM node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot VM. 2.2.6.1. Creating Spot VMs by using compute machine sets You can launch a Spot VM on Azure by adding spotVMOptions to your compute machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: spotVMOptions: {} You can optionally set the spotVMOptions.maxPrice field to limit the cost of the Spot VM. For example you can set maxPrice: '0.98765' . If the maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to -1 and charges up to the standard VM price. Azure caps Spot VM prices at the standard price. Azure will not evict an instance due to pricing if the instance is set with the default maxPrice . However, an instance can still be evicted due to capacity restrictions. Note It is strongly recommended to use the default standard VM price as the maxPrice value and to not set the maximum price for Spot VMs. 2.2.7. Machine sets that deploy machines on Ephemeral OS disks You can create a compute machine set running on Azure that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote Azure Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging. Additional resources For more information, see the Microsoft Azure documentation about Ephemeral OS disks for Azure VMs . 2.2.7.1. Creating machines on Ephemeral OS disks by using compute machine sets You can launch machines on Ephemeral OS disks on Azure by editing your compute machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Edit the custom resource (CR) by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the compute machine set that you want to provision machines on Ephemeral OS disks. Add the following to the providerSpec field: providerSpec: value: ... osDisk: ... diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4 ... 1 2 3 These lines enable the use of Ephemeral OS disks. 4 Ephemeral OS disks are only supported for VMs or scale set instances that use the Standard LRS storage account type. Important The implementation of Ephemeral OS disk support in OpenShift Container Platform only supports the CacheDisk placement type. Do not change the placement configuration setting. 
Create a compute machine set using the updated configuration: USD oc create -f <machine-set-config>.yaml Verification On the Microsoft Azure portal, review the Overview page for a machine deployed by the compute machine set, and verify that the Ephemeral OS disk field is set to OS cache placement . 2.2.8. Machine sets that deploy machines with ultra disks as data disks You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. You can also create a persistent volume claim (PVC) that dynamically binds to a storage class backed by Azure ultra disks and mounts them to pods. Note Data disks do not support the ability to specify disk throughput or disk IOPS. You can configure these properties by using PVCs. Additional resources Microsoft Azure ultra disks documentation Machine sets that deploy machines on ultra disks using CSI PVCs Machine sets that deploy machines on ultra disks using in-tree PVCs 2.2.8.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Create a custom secret in the openshift-machine-api namespace using the worker data secret by running the following command: USD oc -n openshift-machine-api \ get secret <role>-user-data \ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2 1 Replace <role> with worker . 2 Specify userData.txt as the name of the new custom secret. In a text editor, open the userData.txt file and locate the final } character in the file. On the immediately preceding line, add a , . Create a new line after the , and add the following configuration details: "storage": { "disks": [ 1 { "device": "/dev/disk/azure/scsi1/lun0", 2 "partitions": [ 3 { "label": "lun0p1", 4 "sizeMiB": 1024, 5 "startMiB": 0 } ] } ], "filesystems": [ 6 { "device": "/dev/disk/by-partlabel/lun0p1", "format": "xfs", "path": "/var/lib/lun0p1" } ] }, "systemd": { "units": [ 7 { "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", 8 "enabled": true, "name": "var-lib-lun0p1.mount" } ] } 1 The configuration details for the disk that you want to attach to a node as an ultra disk. 2 Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0 , specify lun0 . You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set. 3 The configuration details for a new partition on the disk. 4 Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0 . 5 Specify the total size in MiB of the partition. 6 Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition. 7 Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each. 
8 For Where , specify the value of storage.filesystems.path . For What , specify the value of storage.filesystems.device . Extract the disabling template value to a file called disableTemplating.txt by running the following command: USD oc -n openshift-machine-api get secret <role>-user-data \ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt 1 Replace <role> with worker . Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command: USD oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ 1 --from-file=userData=userData.txt \ --from-file=disableTemplating=disableTemplating.txt 1 For <role>-user-data-x5 , specify the name of the secret. Replace <role> with worker . Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the machine set that you want to provision machines with ultra disks. Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 3 These lines enable the use of ultra disks. For dataDisks , include the entire stanza. 4 Specify the user data secret created earlier. Replace <role> with worker . Create a machine set using the updated configuration by running the following command: USD oc create -f <machine-set-name>.yaml Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example: apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - name: lun0p1 mountPath: "/tmp" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd 2.2.8.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 2.2.8.2.1. Incorrect ultra disk configuration If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails. For example, if the ultraSSDCapability parameter is set to Disabled , but an ultra disk is specified in the dataDisks parameter, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. 
To resolve this issue, verify that your machine set configuration is correct. 2.2.8.2.2. Unsupported disk parameters If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message: failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>." To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct. 2.2.8.2.3. Unable to delete disks If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. If you no longer need the orphaned data disks, you must delete them manually. 2.2.9. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If it is not, you must grant an additional reader role on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 2.2.10. Configuring trusted launch for Azure virtual machines by using machine sets Important Using trusted launch for Azure virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.17 supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. Note Some feature combinations result in an invalid configuration. Table 2.1.
UEFI feature combination compatibility Secure Boot [1] vTPM [2] Valid configuration Enabled Enabled Yes Enabled Disabled Yes Enabled Omitted Yes Disabled Enabled Yes Omitted Enabled Yes Disabled Disabled No Omitted Disabled No Omitted Omitted No Using the secureBoot field. Using the virtualizedTrustedPlatformModule field. For more information about related features and functionality, see the Microsoft Azure documentation about Trusted launch for Azure virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field to provide a valid configuration: Sample valid configuration with UEFI Secure Boot and vTPM enabled apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations. 2 Specifies which UEFI security features to use. This section is required for all valid configurations. 3 Enables UEFI Secure Boot. 4 Enables the use of a vTPM. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured. 2.2.11. Configuring Azure confidential virtual machines by using machine sets Important Using Azure confidential virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.17 supports Azure confidential virtual machines (VMs). Note Confidential VMs are currently not supported on 64-bit ARM architectures. By editing the machine set YAML file, you can configure the confidential VM options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. For more information about related features and functionality, see the Microsoft Azure documentation about Confidential virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: osDisk: # ... managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # ... securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8 # ... 1 Specifies security profile settings for the managed disk when using a confidential VM. 2 Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM. 
3 Specifies security profile settings for the confidential VM. 4 Enables the use of confidential VMs. This value is required for all valid configurations. 5 Specifies which UEFI security features to use. This section is required for all valid configurations. 6 Disables UEFI Secure Boot. 7 Enables the use of a vTPM. 8 Specifies an instance type that supports confidential VMs. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured. 2.2.12. Accelerated Networking for Microsoft Azure VMs Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled during or after installation. 2.2.12.1. Limitations Consider the following limitations when deciding whether to use Accelerated Networking: Accelerated Networking is only supported on clusters where the Machine API is operational. Although the minimum requirement for an Azure worker node is two vCPUs, Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation . When this feature is enabled on an existing Azure cluster, only newly provisioned nodes are affected. Currently running nodes are not reconciled. To enable the feature on all nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas. 2.2.13. Configuring Capacity Reservation by using machine sets OpenShift Container Platform version 4.17 and later supports on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters. You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the VM size, region, and number of instances that you want to reserve. If your Azure subscription quota can accommodate the capacity request, the deployment succeeds. For more information, including limitations and suggested use cases for this Azure instance type, see the Microsoft Azure documentation about On-demand Capacity Reservation . Note You cannot change an existing Capacity Reservation configuration for a machine set. To use a different Capacity Reservation group, you must replace the machine set and the machines that the machine set deployed. Prerequisites You have access to the cluster with cluster-admin privileges. You installed the OpenShift CLI ( oc ). You created a Capacity Reservation group. For more information, see the Microsoft Azure documentation Create a Capacity Reservation . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1 # ... 1 Specify the ID of the Capacity Reservation group that you want the machine set to deploy machines on. 
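The ID that you specify is typically the full Azure resource ID of the Capacity Reservation group, for example /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/capacityReservationGroups/<capacity_reservation_group_name>. The placeholder names in this example are illustrative only, not values from this procedure. If your version of the Azure CLI includes the capacity reservation commands, you can usually look up the ID by running a command similar to the following:
USD az capacity reservation group show --name <capacity_reservation_group_name> --resource-group <resource_group_name> --query id --output tsv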
Verification To verify machine deployment, list the machines that the machine set created by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> where <machine_set_name> is the name of the compute machine set. In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation. 2.2.14. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the Azure cloud provider. The following table lists the validated instance types: vmSize NVIDIA GPU accelerator Maximum number of GPUs Architecture Standard_NC24s_v3 V100 4 x86 Standard_NC4as_T4_v3 T4 1 x86 ND A100 v4 A100 8 x86 Note By default, Azure subscriptions do not have a quota for the Azure instance types with GPU. Customers have to request a quota increase for the Azure instance families listed above. Procedure View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones. USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m Make a copy of one of the existing compute MachineSet definitions and output the result to a YAML file by running the following command. This will be the basis for the GPU-enabled compute machine set definition. 
USD oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml View the content of the machineset: USD cat machineset-azure.yaml Example machineset-azure.yaml file apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: "0" machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" creationTimestamp: "2023-02-06T14:08:19Z" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: "23601" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: "" publisher: "" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: "" version: "" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: "1" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1 Make a copy of the machineset-azure.yaml file by running the following command: USD cp machineset-azure.yaml machineset-azure-gpu.yaml Update the following fields in machineset-azure-gpu.yaml : Change .metadata.name to a name containing gpu . Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name. Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . Change .spec.template.spec.providerSpec.value.vmSize to Standard_NC4as_T4_v3 . 
Example machineset-azure-gpu.yaml file apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: "1" machine.openshift.io/memoryMb: "28672" machine.openshift.io/vCPU: "4" creationTimestamp: "2023-02-06T20:27:12Z" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: "166285" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: "" publisher: "" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: "" version: "" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: "1" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1 To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD diff machineset-azure.yaml machineset-azure-gpu.yaml Example output 14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3 Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f machineset-azure-gpu.yaml Example output machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones. 
USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone. USD oc get machines -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific Azure region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.30.3 myclustername-master-1 Ready control-plane,master 6h41m v1.30.3 myclustername-master-2 Ready control-plane,master 6h39m v1.30.3 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.30.3 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.30.3 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.30.3 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.30.3 View the list of compute machine sets: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f machineset-azure-gpu.yaml View the list of compute machine sets: oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h Verification View the machine set you created by running the following command: USD oc get machineset -n openshift-machine-api | grep gpu The MachineSet replica count is set to 1 so a new Machine object is created automatically. Example output myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m Note There is no need to specify a namespace for the node. The node definition is cluster scoped. 2.2.15. Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. 
The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator from OperatorHub , select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Operator in the console and select Create Node Feature Discovery . Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them. Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet. Additional resources Enabling Accelerated Networking during installation 2.2.15.1. Enabling Accelerated Networking on an existing Microsoft Azure cluster You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster where the Machine API is operational. Procedure Add the following to the providerSpec field: providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2 1 This line enables Accelerated Networking. 2 Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation . Next steps To enable the feature on currently running nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas. Verification On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled . Additional resources Manually scaling a compute machine set 2.3. Creating a compute machine set on Azure Stack Hub You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure Stack Hub.
For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.3.1. Sample YAML for a compute machine set custom resource on Azure Stack Hub This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: "1" 21 1 5 7 13 15 16 17 20 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 18 19 Specify the node label to add. 4 6 10 Specify the infrastructure ID, node label, and region. 14 Specify the region to place machines on. 21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 12 Specify the availability set for the cluster. 2.3.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Create an availability set in which to deploy Azure Stack Hub compute machines. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <availabilitySet> , <clusterID> , and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
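If you only want to inspect the platform-specific part of an existing compute machine set before writing your own file, you can print the providerSpec stanza on its own. The following query is an optional sketch rather than a required step; <machineset_name> is a placeholder for one of the machine sets returned by oc get machinesets :

# <machineset_name> is a placeholder; the jsonpath prints only the provider-specific template values
USD oc get machineset <machineset_name> \
  -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value}{"\n"}'

The output is the part of the template that you must adapt for Azure Stack Hub, such as the availability set, subnet, and vnet values.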
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.3.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.3.4. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure Stack Hub cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 2.3.5. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role is required to be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . 
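If the Cloud Credential Operator has not been granted permissions on the resource group that contains the disk encryption set, one possible way to provide the additional reader access described above is a Reader role assignment scoped to the disk encryption set itself. The following Azure CLI sketch is an illustration only; the principal ID, subscription ID, resource group name, and disk encryption set name are placeholders rather than values from this procedure:

# All bracketed values below are placeholders; substitute the identity and resources used by your cluster
USD az role assignment create \
  --assignee <cluster_identity_principal_id> \
  --role Reader \
  --scope /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name>

The scope uses the same disk encryption set ID format that appears in the providerSpec example in the following procedure.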
Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 2.4. Creating a compute machine set on GCP You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.4.1. Sample YAML for a compute machine set custom resource on GCP This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" , where <role> is the node label to add. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Image path The <path_to_image> string is the path to the image that was used to create the disk. 
If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a Sample GCP MachineSet values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 For <node> , specify the node label to add. 3 Specify the path to the image that is used in current compute machine sets. To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 6 Specifies a single service account. Multiple service accounts are not supported. 2.4.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. 
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.4.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. 
For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.4.4. Configuring persistent disk types by using machine sets You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file. For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following line under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1 1 Specify the persistent disk type. Valid values are pd-ssd , pd-standard , and pd-balanced . The default value is pd-standard . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Type field matches the configured disk type. 2.4.5. Configuring Confidential VM by using machine sets By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys. For more information about Confidential VM features, functions, and compatibility, see the GCP Compute Engine documentation about Confidential VM . Note Confidential VMs are currently not supported on 64-bit ARM architectures. Important OpenShift Container Platform 4.17 does not support some Confidential Compute features, such as Confidential VMs with AMD Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP). Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3 ... 1 Specify whether Confidential VM is enabled. Valid values are Disabled or Enabled . 2 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VM does not support live VM migration. 3 Specify a machine type that supports Confidential VM. Confidential VM supports the N2D and C2D series of machine types. Verification On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured. 2.4.6. Machine sets that deploy machines as preemptible VM instances You can save on costs by creating a compute machine set running on GCP that deploys machines as non-guaranteed preemptible VM instances. Preemptible VM instances utilize excess Compute Engine capacity and are less expensive than normal instances. You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. GCP Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur in 30 seconds. OpenShift Container Platform begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. 
An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a TERMINATED state by Compute Engine. Interruptions can occur when using preemptible VM instances for the following reasons: There is a system or maintenance event The supply of preemptible VM instances decreases The instance reaches the end of the allotted 24-hour period for preemptible VM instances When GCP terminates an instance, a termination handler running on the preemptible VM instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a preemptible VM instance. 2.4.6.1. Creating preemptible VM instances by using compute machine sets You can launch a preemptible VM instance on GCP by adding preemptible to your compute machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: preemptible: true If preemptible is set to true , the machine is labeled as an interruptible-instance after the instance is launched. 2.4.7. Configuring Shielded VM options by using machine sets By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys. For more information about Shielded VM features and functionality, see the GCP Compute Engine documentation about Shielded VM . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 In this section, specify any Shielded VM options that you want. 2 Specify whether integrity monitoring is enabled. Valid values are Disabled or Enabled . Note When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM). 3 Specify whether UEFI Secure Boot is enabled. Valid values are Disabled or Enabled . 4 Specify whether vTPM is enabled. Valid values are Disabled or Enabled . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured. Additional resources What is Shielded VM? Secure Boot Virtual Trusted Platform Module (vTPM) Integrity monitoring 2.4.8. Enabling customer-managed encryption keys for a machine set Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer's data. By default, Compute Engine encrypts this data by using Compute Engine keys. You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key. Note If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account.
The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. Procedure To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location: USD gcloud kms keys add-iam-policy-binding <key_name> \ --keyring <key_ring_name> \ --location <key_ring_location> \ --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter Configure the encryption key under the providerSpec field in your machine set YAML file. For example: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5 1 The name of the customer-managed encryption key that is used for the disk encryption. 2 The name of the KMS key ring that the KMS key belongs to. 3 The GCP location in which the KMS key ring exists. 4 Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the machine set projectID in which the machine set was created is used. 5 Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used. When a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key. 2.4.9. Enabling GPU support for a compute machine set Google Cloud Platform (GCP) Compute Engine enables users to add GPUs to VM instances. Workloads that benefit from access to GPU resources can perform better on compute machines with this feature enabled. OpenShift Container Platform on GCP supports NVIDIA GPU models in the A2 and N1 machine series. Table 2.2. Supported GPU configurations Model name GPU type Machine types [1] NVIDIA A100 nvidia-tesla-a100 a2-highgpu-1g a2-highgpu-2g a2-highgpu-4g a2-highgpu-8g a2-megagpu-16g NVIDIA K80 nvidia-tesla-k80 n1-standard-1 n1-standard-2 n1-standard-4 n1-standard-8 n1-standard-16 n1-standard-32 n1-standard-64 n1-standard-96 n1-highmem-2 n1-highmem-4 n1-highmem-8 n1-highmem-16 n1-highmem-32 n1-highmem-64 n1-highmem-96 n1-highcpu-2 n1-highcpu-4 n1-highcpu-8 n1-highcpu-16 n1-highcpu-32 n1-highcpu-64 n1-highcpu-96 NVIDIA P100 nvidia-tesla-p100 NVIDIA P4 nvidia-tesla-p4 NVIDIA T4 nvidia-tesla-t4 NVIDIA V100 nvidia-tesla-v100 For more information about machine types, including specifications, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about N1 machine series , A2 machine series , and GPU regions and zones availability . You can define which supported GPU to use for an instance by using the Machine API. You can configure machines in the N1 machine series to deploy with one of the supported GPU types. Machines in the A2 machine series come with associated GPUs, and cannot use guest accelerators. Note GPUs for graphics workloads are not supported. Procedure In a text editor, open the YAML file for an existing compute machine set or create a new one. Specify a GPU configuration under the providerSpec field in your compute machine set YAML file. 
See the following examples of valid configurations: Example configuration for the A2 machine series providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3 1 Specify the machine type. Ensure that the machine type is included in the A2 machine series. 2 When using GPU support, you must set onHostMaintenance to Terminate . 3 Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never . Example configuration for the N1 machine series providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5 1 Specify the number of GPUs to attach to the machine. 2 Specify the type of GPUs to attach to the machine. Ensure that the machine type and GPU type are compatible. 3 Specify the machine type. Ensure that the machine type and GPU type are compatible. 4 When using GPU support, you must set onHostMaintenance to Terminate . 5 Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never . 2.4.10. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the GCP cloud provider. The following table lists the validated instance types: Instance type NVIDIA GPU accelerator Maximum number of GPUs Architecture a2-highgpu-1g A100 1 x86 n1-standard-4 T4 1 x86 Procedure Make a copy of an existing MachineSet . In the new copy, change the machine set name in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset . Change the instance type to add the following two lines to the newly copied MachineSet : Example a2-highgpu-1g.json file { "apiVersion": "machine.openshift.io/v1beta1", "kind": "MachineSet", "metadata": { "annotations": { "machine.openshift.io/GPU": "0", "machine.openshift.io/memoryMb": "16384", "machine.openshift.io/vCPU": "4" }, "creationTimestamp": "2023-01-13T17:11:02Z", "generation": 1, "labels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p" }, "name": "myclustername-2pt9p-worker-gpu-a", "namespace": "openshift-machine-api", "resourceVersion": "20185", "uid": "2daf4712-733e-4399-b4b4-d43cb1ed32bd" }, "spec": { "replicas": 1, "selector": { "matchLabels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p", "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" } }, "template": { "metadata": { "labels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p", "machine.openshift.io/cluster-api-machine-role": "worker", "machine.openshift.io/cluster-api-machine-type": "worker", "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" } }, "spec": { "lifecycleHooks": {}, "metadata": {}, "providerSpec": { "value": { "apiVersion": "machine.openshift.io/v1beta1", "canIPForward": false, "credentialsSecret": { "name": "gcp-cloud-credentials" }, "deletionProtection": false, "disks": [ { "autoDelete": true, "boot": true, "image": "projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64", "labels": null, "sizeGb": 128, "type": "pd-ssd" } ], "kind": "GCPMachineProviderSpec", "machineType": "a2-highgpu-1g", "onHostMaintenance": "Terminate", "metadata": { "creationTimestamp": null }, "networkInterfaces": [ { "network": "myclustername-2pt9p-network", "subnetwork": "myclustername-2pt9p-worker-subnet" } ], 
"preemptible": true, "projectID": "myteam", "region": "us-central1", "serviceAccounts": [ { "email": "[email protected]", "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] } ], "tags": [ "myclustername-2pt9p-worker" ], "userDataSecret": { "name": "worker-user-data" }, "zone": "us-central1-a" } } } } }, "status": { "availableReplicas": 1, "fullyLabeledReplicas": 1, "observedGeneration": 1, "readyReplicas": 1, "replicas": 1 } } View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific GCP region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.30.3 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.30.3 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.30.3 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.30.3 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.30.3 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.30.3 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.30.3 View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the GCP region. The installer automatically load balances compute machines across availability zones. USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone. USD oc get machines -n openshift-machine-api | grep worker Example output myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition. USD oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json> Edit the JSON file to make the following changes to the new MachineSet definition: Rename the machine set name by inserting the substring gpu in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset . Change the machineType of the new MachineSet definition to a2-highgpu-1g , which includes an NVIDIA A100 GPU. jq .spec.template.spec.providerSpec.value.machineType ocp_4.17_machineset-a2-highgpu-1g.json "a2-highgpu-1g" The <output_file.json> file is saved as ocp_4.17_machineset-a2-highgpu-1g.json . Update the following fields in ocp_4.17_machineset-a2-highgpu-1g.json : Change .metadata.name to a name containing gpu . Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . 
Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . Change .spec.template.spec.providerSpec.value.machineType to a2-highgpu-1g . Add the following line under machineType : "onHostMaintenance": "Terminate", For example: "machineType": "a2-highgpu-1g", "onHostMaintenance": "Terminate", To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.17_machineset-a2-highgpu-1g.json - Example output 15c15 < "name": "myclustername-2pt9p-worker-gpu-a", --- > "name": "myclustername-2pt9p-worker-a", 25c25 < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" --- > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a" 34c34 < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" --- > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a" 59,60c59 < "machineType": "a2-highgpu-1g", < "onHostMaintenance": "Terminate", --- > "machineType": "n2-standard-4", Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f ocp_4.17_machineset-a2-highgpu-1g.json Example output machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created Verification View the machine set you created by running the following command: USD oc -n openshift-machine-api get machinesets | grep gpu The MachineSet replica count is set to 1 so a new Machine object is created automatically. Example output myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m Note Note that there is no need to specify a namespace for the node. The node definition is cluster scoped. 2.4.11. Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator, select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Operator in the console and select Create Node Feature Discovery . Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them.
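The console form described above is the documented path. If you prefer to drive this step from the CLI, a NodeFeatureDiscovery custom resource can be applied to the openshift-nfd namespace instead. The following is a minimal sketch that relies on the Operator defaults; the instance name nfd-instance is an arbitrary choice, and the exact spec fields that your Operator version requires may differ:

apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance        # arbitrary name chosen for this sketch
  namespace: openshift-nfd
spec: {}                    # relies on Operator defaults; your Operator version may require explicit operand settings

Apply the file with USD oc apply -f <file_name>.yaml and then continue with the verification steps that follow.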
Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet. 2.5. Creating a compute machine set on IBM Cloud You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on IBM Cloud(R). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.5.1. Sample YAML for a compute machine set custom resource on IBM Cloud This sample YAML defines a compute machine set that runs in a specified IBM Cloud(R) zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 16 The node label to add. 4 6 10 The infrastructure ID, node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify. 13 Specify the IBM Cloud(R) instance profile . 14 Specify the region to place machines on. 15 The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. 17 The VPC name. 18 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 2.5.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.5.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.6. 
Creating a compute machine set on IBM Power Virtual Server You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on IBM Power(R) Virtual Server. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.6.1. Sample YAML for a compute machine set custom resource on IBM Power Virtual Server This sample YAML file defines a compute machine set that runs in a specified IBM Power(R) Virtual Server zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_PrivateUSD type: RegEx processorType: Shared processors: "0.5" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 The node label to add. 4 6 10 The infrastructure ID, node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID within your region to place machines on. 2.6.2. 
Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.6.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. 
Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.7. Creating a compute machine set on Nutanix You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Nutanix. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.7.1. Sample YAML for a compute machine set custom resource on Nutanix This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI ( oc ). Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 Specify the node label to add. 3 Specify the infrastructure ID, node label, and zone. 4 Annotations for the cluster autoscaler. 5 Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.17. 6 Specify one or more Nutanix Prism categories to apply to compute machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management . 7 Specify a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. 8 Specify the image to use. Use an image from an existing default compute machine set for the cluster. 9 Specify the amount of memory for the cluster in Gi. 10 Specify the Nutanix project that you use for your cluster. In this example, the project type is name , so there is a name stanza. 11 Specify the size of the system disk in Gi. 12 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that installation program populates in the default compute machine set. 13 Specify the number of vCPU sockets. 14 Specify the number of vCPUs per socket. 2.7.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). 
Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.7.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . 
and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.7.4. Failure domains for Nutanix clusters To add or update the failure domain configuration on a Nutanix cluster, you must make coordinated changes to several resources. The following actions are required: Modify the cluster infrastructure custom resource (CR). Modify the cluster control plane machine set CR. Modify or replace the compute machine set CRs. For more information, see "Adding failure domains to an existing Nutanix cluster" in the Post-installation configuration content. Additional resources Adding failure domains to an existing Nutanix cluster 2.8. Creating a compute machine set on OpenStack You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.8.1. Sample YAML for a compute machine set custom resource on RHOSP This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone> 1 5 7 13 15 16 17 18 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 19 Specify the node label to add. 4 6 10 Specify the infrastructure ID and node label. 11 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 12 Required for deployments to multiple networks. To specify multiple networks, add another entry in the networks array. Also, you must include the network that is used as the primarySubnet value. 14 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 2.8.2. Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP If you configured your cluster for single-root I/O virtualization (SR-IOV), you can create compute machine sets that use that technology. This sample YAML defines a compute machine set that uses SR-IOV networks. The nodes that it creates are labeled with node-role.openshift.io/<node_role>: "" In this sample, infrastructure_id is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and node_role is the node label to add. The sample assumes two SR-IOV networks that are named "radio" and "uplink". The networks are used in port definitions in the spec.template.spec.providerSpec.value.ports list. Note Only parameters that are specific to SR-IOV deployments are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource on RHOSP". 
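The example references the SR-IOV networks and subnets by UUID. If you need to look up those UUIDs, one possible approach, assuming that the openstack command-line client is available and that the networks are named radio and uplink as in this sample (the subnet names are placeholders), is:
openstack network show radio -c id -f value
openstack subnet show <radio_subnet_name> -c id -f value
Repeat the lookups for the uplink network and subnet, and then substitute the returned UUIDs into the port definitions.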
An example compute machine set that uses SR-IOV networks apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> 1 5 Enter a network UUID for each port. 2 6 Enter a subnet UUID for each port. 3 7 The value of the vnicType parameter must be direct for each port. 4 8 The value of the portSecurity parameter must be false for each port. You cannot set security groups and allowed address pairs for ports when port security is disabled. Setting security groups on the instance applies the groups to all ports that are attached to it. Important After you deploy compute machines that are SR-IOV-capable, you must label them as such. For example, from a command line, enter: USD oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable="true" Note Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix> . The nameSuffix field is required in port definitions. You can enable trunking for each port. Optionally, you can add tags to ports as part of their tags lists. Additional resources Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack 2.8.3. Sample YAML for SR-IOV deployments where port security is disabled To create single-root I/O virtualization (SR-IOV) ports on a network that has port security disabled, define a compute machine set that includes the ports as items in the spec.template.spec.providerSpec.value.ports list. This difference from the standard SR-IOV compute machine set is due to the automatic security group and allowed address pair configuration that occurs for ports that are created by using the network and subnet interfaces. 
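Because that automatic configuration does not apply to ports that you define directly, you must supply the allowed address pairs yourself, which requires the fixed IP addresses of the API and ingress virtual IP ports. One possible way to find them, assuming that the openstack command-line client is installed and that <machines_network> is a placeholder for your machines network name, is to list the ports on that network and read their fixed IPs:
openstack port list --network <machines_network> -c Name -c "Fixed IP Addresses"
This command is a sketch of one approach; the exact port names depend on how your cluster was installed.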
Ports that you define for machines subnets require: Allowed address pairs for the API and ingress virtual IP ports The compute security group Attachment to the machines network and subnet Note Only parameters that are specific to SR-IOV deployments where port security is disabled are described in this sample. To review a more general sample, see Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP". An example compute machine set that uses SR-IOV networks and has port security disabled apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data 1 Specify allowed address pairs for the API and ingress ports. 2 3 Specify the machines network and subnet. 4 Specify the compute machines security group. Note Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix> . The nameSuffix field is required in port definitions. You can enable trunking for each port. Optionally, you can add tags to ports as part of their tags lists. 2.8.4. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.8.5. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.9. 
Creating a compute machine set on vSphere You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.9.1. Sample YAML for a compute machine set custom resource on vSphere This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and node label. 6 7 9 Specify the node label to add. 10 Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster. 11 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 12 Specify the vCenter data center to deploy the compute machine set on. 13 Specify the vCenter datastore to deploy the compute machine set on. 
14 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 15 Specify the vSphere resource pool for your VMs. 16 Specify the vCenter server IP or fully qualified domain name. 2.9.2. Minimum required vCenter privileges for compute machine set management To manage compute machine sets in an OpenShift Container Platform cluster on vCenter, you must use an account with privileges to read, create, and delete the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the minimum required privileges. The following table lists the minimum vCenter roles and privileges that are required to create, scale, and delete compute machine sets and to delete machines in your OpenShift Container Platform cluster. Example 2.1. Minimum vCenter roles and privileges required for compute machine set management vSphere object for role When required Required privileges vSphere vCenter Always InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update 1 StorageProfile.View 1 vSphere vCenter Cluster Always Resource.AssignVMToPool vSphere datastore Always Datastore.AllocateSpace Datastore.Browse vSphere Port Group Always Network.Assign Virtual Machine Folder Always VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.Memory VirtualMachine.Config.Settings VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter data center If the installation program creates the virtual machine folder Resource.AssignVMToPool VirtualMachine.Provisioning.DeployTemplate 1 The StorageProfile.Update and StorageProfile.View permissions are required only for storage backends that use the Container Storage Interface (CSI). The following table details the permissions and propagation settings that are required for compute machine set management. Example 2.2. Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always Not required Listed required privileges vSphere vCenter data center Existing folder Not required ReadOnly permission Installation program creates the folder Required Listed required privileges vSphere vCenter Cluster Always Required Listed required privileges vSphere vCenter datastore Always Not required Listed required privileges vSphere Switch Always Not required ReadOnly permission vSphere Port Group Always Not required Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder Required Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. 2.9.3. 
Requirements for clusters with user-provisioned infrastructure to use compute machine sets To use compute machine sets on clusters that have user-provisioned infrastructure, you must ensure that your cluster configuration supports using the Machine API. Obtaining the infrastructure ID To create compute machine sets, you must be able to supply the infrastructure ID for your cluster. Procedure To obtain the infrastructure ID for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}' Satisfying vSphere credentials requirements To use compute machine sets, the Machine API must be able to interact with vCenter. Credentials that authorize the Machine API components to interact with vCenter must exist in a secret in the openshift-machine-api namespace. Procedure To determine whether the required credentials exist, run the following command: USD oc get secret \ -n openshift-machine-api vsphere-cloud-credentials \ -o go-template='{{range USDk,USDv := .data}}{{printf "%s: " USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{"\n"}}{{end}}' Sample output <vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user> where <vcenter-server> is the IP address or fully qualified domain name (FQDN) of the vCenter server and <openshift-user> and <openshift-user-password> are the OpenShift Container Platform administrator credentials to use. If the secret does not exist, create it by running the following command: USD oc create secret generic vsphere-cloud-credentials \ -n openshift-machine-api \ --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password> Satisfying Ignition configuration requirements Provisioning virtual machines (VMs) requires a valid Ignition configuration. The Ignition configuration contains the machine-config-server address and a system trust bundle for obtaining further Ignition configurations from the Machine Config Operator. By default, this configuration is stored in the worker-user-data secret in the openshift-machine-api namespace. Compute machine sets reference the secret during the machine creation process. Procedure To determine whether the required secret exists, run the following command: USD oc get secret \ -n openshift-machine-api worker-user-data \ -o go-template='{{range USDk,USDv := .data}}{{printf "%s: " USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{"\n"}}{{end}}' Sample output disableTemplating: false userData: 1 { "ignition": { ... }, ... } 1 The full output is omitted here, but should have this format. If the secret does not exist, create it by running the following command: USD oc create secret generic worker-user-data \ -n openshift-machine-api \ --from-file=<installation_directory>/worker.ign where <installation_directory> is the directory that was used to store your installation assets during cluster installation. Additional resources Understanding the Machine Config Operator Installing RHCOS and starting the OpenShift Container Platform bootstrap process 2.9.4. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Note Clusters that are installed with user-provisioned infrastructure have a different networking stack than clusters with infrastructure that is provisioned by the installation program.
As a result of this difference, automatic load balancer management is unsupported on clusters that have user-provisioned infrastructure. For these clusters, a compute machine set can only create worker and infra type machines. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Have the necessary permissions to deploy VMs in your vCenter instance and have the required access to the datastore specified. If your cluster uses user-provisioned infrastructure, you have satisfied the specific Machine API requirements for that configuration. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. If you are creating a compute machine set for a cluster that has user-provisioned infrastructure, note the following important values: Example vSphere providerSpec values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... template: ... 
spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: "<vm_network_name>" numCPUs: 4 numCoresPerSocket: 4 snapshot: "" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4 1 The name of the secret in the openshift-machine-api namespace that contains the required vCenter credentials. 2 The name of the RHCOS VM template for your cluster that was created during installation. 3 The name of the secret in the openshift-machine-api namespace that contains the required Ignition configuration credentials. 4 The IP address or fully qualified domain name (FQDN) of the vCenter server. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.9.5. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.9.6. Adding tags to machines by using machine sets OpenShift Container Platform adds a cluster-specific tag to each virtual machine (VM) that it creates. The installation program uses these tags to select the VMs to delete when uninstalling a cluster. In addition to the cluster-specific tags assigned to VMs, you can configure a machine set to add up to 10 additional vSphere tags to the VMs it provisions. Prerequisites You have access to an OpenShift Container Platform cluster installed on vSphere using an account with cluster-admin permissions. You have access to the VMware vCenter console associated with your cluster. You have created a tag in the vCenter console. You have installed the OpenShift CLI ( oc ). 
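The following procedure edits the machine set YAML in a text editor. If you prefer to apply the change non-interactively after you identify the tag ID, a merge patch such as the following sketch should produce the same providerSpec change; the machine set name and tag ID shown here are placeholders:
oc patch machineset <machine_set_name> -n openshift-machine-api --type merge -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"tagIDs":["urn:vmomi:InventoryServiceTag:<tag_uuid>:GLOBAL"]}}}}}}'
As with any change to a machine set template, the added tags apply only to machines that the machine set provisions after the change.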
Procedure Use the vCenter console to find the tag ID for any tag that you want to add to your machines: Log in to the vCenter console. From the Home menu, click Tags & Custom Attributes . Select a tag that you want to add to your machines. Use the browser URL for the tag that you select to identify the tag ID. Example tag URL https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions Example tag ID urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2 # ... 1 Specify a list of up to 10 tags to add to the machines that this machine set provisions. 2 Specify the value of the tag that you want to add to your machines. For example, urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL . 2.10. Creating a compute machine set on bare metal You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on bare metal. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.10.1. Sample YAML for a compute machine set custom resource on bare metal This sample YAML defines a compute machine set that runs on bare metal and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data-managed 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and node label. 6 7 9 Specify the node label to add. 10 Edit the checksum URL to use the API VIP address. 11 Edit the url value to use the API VIP address. 2.10.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.10.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition
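For reference, a minimal sketch of the ClusterAutoscaler CR entry that the preceding note refers to, assuming the nvidia-t4 label value used above and illustrative min and max counts, looks like this:
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
      - type: nvidia-t4
        min: 0
        max: 4
The type value must match the cluster-api/accelerator label value that you set on the machine set.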
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "apiVersion: 
machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 placementGroupPartition: <placement_group_partition_number> 6", "providerSpec: value: metadataServiceOptions: authentication: Required 1", "providerSpec: placement: tenancy: dedicated", "providerSpec: value: spotMarketOptions: {}", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.30.3 ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.30.3 ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.30.3 ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.30.3 ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.30.3 ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.30.3", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h", "oc get machines -n openshift-machine-api | grep worker", "preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h", "oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json>", "jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json \"g4dn.xlarge\"", "oc -n openshift-machine-api get preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json -", "10c10 < \"name\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\", --- > \"name\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\", 21c21 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 31c31 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 60c60 < \"instanceType\": \"g4dn.xlarge\", --- > \"instanceType\": \"m5.xlarge\",", "oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json", "machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created", "oc -n openshift-machine-api get machinesets | grep gpu", "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s", "oc -n openshift-machine-api get machines | grep gpu", "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true 
feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700", "providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "providerSpec: value: spotVMOptions: {}", "oc edit machineset <machine-set-name>", "providerSpec: value: osDisk: diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4", "oc create -f <machine-set-config>.yaml", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2", "\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }", "oc -n openshift-machine-api get 
secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt", "oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4", "oc create -f <machine-set-name>.yaml", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: osDisk: # managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m", "oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml", "cat machineset-azure.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"0\" machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T14:08:19Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker 
machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"23601\" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1", "cp machineset-azure.yaml machineset-azure-gpu.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"1\" machine.openshift.io/memoryMb: \"28672\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T20:27:12Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"166285\" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: 
myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1", "diff machineset-azure.yaml machineset-azure-gpu.yaml", "14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3", "oc create -f machineset-azure-gpu.yaml", "machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h", "oc get machines -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m", "oc get nodes", "NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.30.3 myclustername-master-1 Ready control-plane,master 6h41m v1.30.3 myclustername-master-2 Ready control-plane,master 6h39m v1.30.3 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.30.3 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.30.3 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.30.3 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.30.3", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h", "oc create -f machineset-azure-gpu.yaml", "get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h", "oc get machineset -n openshift-machine-api | grep gpu", "myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m", "oc -n openshift-machine-api get machines | grep gpu", "myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d 
nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: 
machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m 
agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3", "providerSpec: value: preemptible: true", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4", "gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5", "providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3", "providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5", "machineType: a2-highgpu-1g onHostMaintenance: Terminate", "{ \"apiVersion\": \"machine.openshift.io/v1beta1\", \"kind\": \"MachineSet\", \"metadata\": { \"annotations\": { \"machine.openshift.io/GPU\": \"0\", \"machine.openshift.io/memoryMb\": \"16384\", \"machine.openshift.io/vCPU\": \"4\" }, \"creationTimestamp\": \"2023-01-13T17:11:02Z\", \"generation\": 1, \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\" }, \"name\": \"myclustername-2pt9p-worker-gpu-a\", \"namespace\": \"openshift-machine-api\", \"resourceVersion\": \"20185\", \"uid\": \"2daf4712-733e-4399-b4b4-d43cb1ed32bd\" }, \"spec\": { \"replicas\": 1, \"selector\": { 
\"matchLabels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"template\": { \"metadata\": { \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machine-role\": \"worker\", \"machine.openshift.io/cluster-api-machine-type\": \"worker\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"spec\": { \"lifecycleHooks\": {}, \"metadata\": {}, \"providerSpec\": { \"value\": { \"apiVersion\": \"machine.openshift.io/v1beta1\", \"canIPForward\": false, \"credentialsSecret\": { \"name\": \"gcp-cloud-credentials\" }, \"deletionProtection\": false, \"disks\": [ { \"autoDelete\": true, \"boot\": true, \"image\": \"projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64\", \"labels\": null, \"sizeGb\": 128, \"type\": \"pd-ssd\" } ], \"kind\": \"GCPMachineProviderSpec\", \"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\", \"metadata\": { \"creationTimestamp\": null }, \"networkInterfaces\": [ { \"network\": \"myclustername-2pt9p-network\", \"subnetwork\": \"myclustername-2pt9p-worker-subnet\" } ], \"preemptible\": true, \"projectID\": \"myteam\", \"region\": \"us-central1\", \"serviceAccounts\": [ { \"email\": \"[email protected]\", \"scopes\": [ \"https://www.googleapis.com/auth/cloud-platform\" ] } ], \"tags\": [ \"myclustername-2pt9p-worker\" ], \"userDataSecret\": { \"name\": \"worker-user-data\" }, \"zone\": \"us-central1-a\" } } } } }, \"status\": { \"availableReplicas\": 1, \"fullyLabeledReplicas\": 1, \"observedGeneration\": 1, \"readyReplicas\": 1, \"replicas\": 1 } }", "oc get nodes", "NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.30.3 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.30.3 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.30.3 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.30.3 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.30.3 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.30.3 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.30.3", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h", "oc get machines -n openshift-machine-api | grep worker", "myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h", "oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json>", "jq .spec.template.spec.providerSpec.value.machineType ocp_4.17_machineset-a2-highgpu-1g.json \"a2-highgpu-1g\"", "\"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\",", "oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.17_machineset-a2-highgpu-1g.json -", "15c15 < \"name\": \"myclustername-2pt9p-worker-gpu-a\", --- > \"name\": \"myclustername-2pt9p-worker-a\", 25c25 < 
\"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 34c34 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 59,60c59 < \"machineType\": \"a2-highgpu-1g\", < \"onHostMaintenance\": \"Terminate\", --- > \"machineType\": \"n2-standard-4\",", "oc create -f ocp_4.17_machineset-a2-highgpu-1g.json", "machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created", "oc -n openshift-machine-api get machinesets | grep gpu", "myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m", "oc -n openshift-machine-api get machines | grep gpu", "myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: 
MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_PrivateUSD type: RegEx processorType: Shared processors: \"0.5\" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: 
labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: 
<infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: 
<machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>", "oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable=\"true\"", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'", "oc get secret -n openshift-machine-api vsphere-cloud-credentials -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "<vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user>", "oc create secret generic vsphere-cloud-credentials -n openshift-machine-api --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>", "oc get secret -n openshift-machine-api worker-user-data -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "disableTemplating: false userData: 1 { \"ignition\": { }, }", "oc create secret generic worker-user-data -n openshift-machine-api --from-file=<installation_directory>/worker.ign", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m 
agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" numCPUs: 4 numCoresPerSocket: 4 snapshot: \"\" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions", "urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null 
userData: name: worker-user-data-managed", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_management/managing-compute-machines-with-the-machine-api
Chapter 6. View OpenShift Data Foundation Topology
Chapter 6. View OpenShift Data Foundation Topology The topology view shows a mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage → Data Foundation → Topology . The view shows the storage cluster and the zones inside it. Nodes are depicted as circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status, health, or an indication of alerts. Choose a node to view its details in the right-hand panel. You can also access resources or deployments within a node by clicking the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_microsoft_azure/viewing-odf-topology_mcg-verify
10.2. About Asynchronous Processes
10.2. About Asynchronous Processes For a typical write operation in Red Hat JBoss Data Grid, the following processes fall on the critical path, ordered from the most resource-intensive to the least: network calls, marshalling, writing to a cache store (optional), and locking. In JBoss Data Grid, using asynchronous methods removes network calls and marshalling from the critical path.
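The following minimal Java sketch is not taken from the guide; the cache manager setup and the key and value names are purely illustrative. Assuming an embedded Infinispan-based cache, it shows the idea behind asynchronous methods: putAsync() returns a Future immediately, so marshalling and the network call proceed off the caller's critical path, and the caller blocks only if and when it needs the result.

import java.util.concurrent.Future;

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class AsyncPutSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative setup: a default, locally configured cache manager.
        DefaultCacheManager manager = new DefaultCacheManager();
        Cache<String, String> cache = manager.getCache();

        // putAsync returns immediately; marshalling and any network calls
        // (for example, replication to other nodes) complete in the background.
        Future<String> previousValue = cache.putAsync("key", "value");

        // ... other work can run here while the write completes ...

        previousValue.get(); // block only when the result is actually needed
        manager.stop();
    }
}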
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/about_asynchronous_processes
5.3. Supported Installation Targets
5.3. Supported Installation Targets An installation target is a storage device that will store Red Hat Enterprise Linux and boot the system. Red Hat Enterprise Linux supports the following installation targets for AMD, Intel, and ARM systems: storage connected by a standard internal interface, such as SCSI, SATA, or SAS; BIOS/firmware RAID devices; NVDIMM devices in sector mode on the Intel64 and AMD64 architectures, supported by the nd_pmem driver; Fibre Channel Host Bus Adapters and multipath devices, some of which can require vendor-provided drivers; Xen block devices on Intel processors in Xen virtual machines; and VirtIO block devices on Intel processors in KVM virtual machines. Red Hat does not support installation to USB drives or SD memory cards. For information about the support for third-party virtualization technologies, see the Red Hat Hardware Compatibility List , available online at https://hardware.redhat.com .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-planning-supported-hardware-x86
Chapter 2. Installing Prerequisite Components
Chapter 2. Installing Prerequisite Components 2.1. Install Open JDK on Red Hat Enterprise Linux Install the OpenJDK package: To install it to alternatives, run these commands: As root, run the alternatives command for java : Select /usr/lib/jvm/jre-[VERSION]-openjdk/bin/java . Then do the same for javac : Select /usr/lib/jvm/java-[VERSION]-openjdk/bin/javac .
[ "install java-[VERSION]-openjdk-devel", "sudo alternatives --install /usr/bin/java java usr/lib/jvm/java-[VERSION]-openjdk/bin/java 1000", "sudo alternatives --install /usr/bin/javac javac /usr/lib/jvm/java-[VERSION]-openjdk/bin/javac 1000", "/usr/sbin/alternatives --config java", "/usr/sbin/alternatives --config javac" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/ch02
Chapter 3. Node Feature Discovery Operator
Chapter 3. Node Feature Discovery Operator Learn about the Node Feature Discovery (NFD) Operator and how you can use it to expose node-level information by orchestrating Node Feature Discovery, a Kubernetes add-on for detecting hardware features and system configuration. The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in an OpenShift Container Platform cluster by labeling the nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, operating system version, and so on. The NFD Operator can be found on the Operator Hub by searching for "Node Feature Discovery". 3.1. Installing the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the NFD daemon set. As a cluster administrator, you can install the NFD Operator by using the OpenShift Container Platform CLI or the web console. 3.1.1. Installing the NFD Operator using the CLI As a cluster administrator, you can install the NFD Operator using the CLI. Prerequisites An OpenShift Container Platform cluster Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the NFD Operator. Create the following Namespace custom resource (CR) that defines the openshift-nfd namespace, and then save the YAML in the nfd-namespace.yaml file. Set cluster-monitoring to "true" . apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: "true" Create the namespace by running the following command: USD oc create -f nfd-namespace.yaml Install the NFD Operator in the namespace you created in the step by creating the following objects: Create the following OperatorGroup CR and save the YAML in the nfd-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd Create the OperatorGroup CR by running the following command: USD oc create -f nfd-operatorgroup.yaml Create the following Subscription CR and save the YAML in the nfd-sub.yaml file: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: "stable" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription object by running the following command: USD oc create -f nfd-sub.yaml Change to the openshift-nfd project: USD oc project openshift-nfd Verification To verify that the Operator deployment is successful, run: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m A successful deployment shows a Running status. 3.1.2. Installing the NFD Operator using the web console As a cluster administrator, you can install the NFD Operator using the web console. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Node Feature Discovery from the list of available Operators, and then click Install . On the Install Operator page, select A specific namespace on the cluster , and then click Install . You do not need to create a namespace because it is created for you. Verification To verify that the NFD Operator installed successfully: Navigate to the Operators Installed Operators page. 
Ensure that Node Feature Discovery is listed in the openshift-nfd project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. Troubleshooting If the Operator does not appear as installed, troubleshoot further: Navigate to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-nfd project. 3.2. Using the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the Node-Feature-Discovery daemon set by watching for a NodeFeatureDiscovery custom resource (CR). Based on the NodeFeatureDiscovery CR, the Operator creates the operand (NFD) components in the selected namespace. You can edit the CR to use another namespace, image, image pull policy, and nfd-worker-conf config map, among other options. As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift CLI ( oc ) or the web console. Note Starting with version 4.12, the operand.image field in the NodeFeatureDiscovery CR is mandatory. If the NFD Operator is deployed by using Operator Lifecycle Manager (OLM), OLM automatically sets the operand.image field. If you create the NodeFeatureDiscovery CR by using the OpenShift Container Platform CLI or the OpenShift Container Platform web console, you must set the operand.image field explicitly. 3.2.1. Creating a NodeFeatureDiscovery CR by using the CLI As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI ( oc ). Note The spec.operand.image setting requires a -rhel9 image to be defined for use with OpenShift Container Platform releases 4.13 and later. The following example shows the use of -rhel9 to acquire the correct image. Prerequisites You have access to an OpenShift Container Platform cluster You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You installed the NFD Operator. 
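Before you create the CR, you can confirm that the NFD Operator is installed and available. A minimal check, assuming the Operator was installed into the default openshift-nfd namespace used in the preceding sections:
# The PHASE column of the ClusterServiceVersion should report Succeeded
oc get csv -n openshift-nfd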
Procedure Create a NodeFeatureDiscovery CR: Example NodeFeatureDiscovery CR apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: "" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.17 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - "BMI1" - "BMI2" - "CLMUL" - "CMOV" - "CX16" - "ERMS" - "F16C" - "HTT" - "LZCNT" - "MMX" - "MMXEXT" - "NX" - "POPCNT" - "RDRAND" - "RDSEED" - "RDTSCP" - "SGX" - "SSE" - "SSE2" - "SSE3" - "SSE4.1" - "SSE4.2" - "SSSE3" attributeWhitelist: kernel: kconfigFile: "/path/to/kconfig" configOpts: - "NO_HZ" - "X86" - "DMI" pci: deviceClassWhitelist: - "0200" - "03" - "12" deviceLabelFields: - "class" customConfig: configData: | - name: "more.kernel.features" matchOn: - loadedKMod: ["example_kmod3"] 1 The operand.image field is mandatory. Create the NodeFeatureDiscovery CR by running the following command: USD oc apply -f <filename> Verification Check that the NodeFeatureDiscovery CR was created by running the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s A successful deployment shows a Running status. 3.2.2. Creating a NodeFeatureDiscovery CR by using the CLI in a disconnected environment As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You installed the NFD Operator. You have access to a mirror registry with the required images. You installed the skopeo CLI tool. Procedure Determine the digest of the registry image: Run the following command: USD skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version> Example command USD skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12 Inspect the output to identify the image digest: Example output { ... "Digest": "sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef", ... 
} Use the skopeo CLI tool to copy the image from registry.redhat.io to your mirror registry, by running the following command: skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> Example command skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef Create a NodeFeatureDiscovery CR: Example NodeFeatureDiscovery CR apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - "BMI1" - "BMI2" - "CLMUL" - "CMOV" - "CX16" - "ERMS" - "F16C" - "HTT" - "LZCNT" - "MMX" - "MMXEXT" - "NX" - "POPCNT" - "RDRAND" - "RDSEED" - "RDTSCP" - "SGX" - "SSE" - "SSE2" - "SSE3" - "SSE4.1" - "SSE4.2" - "SSSE3" attributeWhitelist: kernel: kconfigFile: "/path/to/kconfig" configOpts: - "NO_HZ" - "X86" - "DMI" pci: deviceClassWhitelist: - "0200" - "03" - "12" deviceLabelFields: - "class" customConfig: configData: | - name: "more.kernel.features" matchOn: - loadedKMod: ["example_kmod3"] 1 The operand.image field is mandatory. Create the NodeFeatureDiscovery CR by running the following command: USD oc apply -f <filename> Verification Check the status of the NodeFeatureDiscovery CR by running the following command: USD oc get nodefeaturediscovery nfd-instance -o yaml Check that the pods are running without ImagePullBackOff errors by running the following command: USD oc get pods -n <nfd_namespace> 3.2.3. Creating a NodeFeatureDiscovery CR by using the web console As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster You logged in as a user with cluster-admin privileges. You installed the NFD Operator. Procedure Navigate to the Operators Installed Operators page. In the Node Feature Discovery section, under Provided APIs , click Create instance . Edit the values of the NodeFeatureDiscovery CR. Click Create . Note Starting with version 4.12, the operand.image field in the NodeFeatureDiscovery CR is mandatory. If the NFD Operator is deployed by using Operator Lifecycle Manager (OLM), OLM automatically sets the operand.image field. If you create the NodeFeatureDiscovery CR by using the OpenShift Container Platform CLI or the OpenShift Container Platform web console, you must set the operand.image field explicitly. 3.3. Configuring the Node Feature Discovery Operator 3.3.1. core The core section contains common configuration settings that are not specific to any particular feature source. 
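The options described in this section are supplied through the configData field of the NodeFeatureDiscovery CR, as shown in the earlier examples, and are typically surfaced to the nfd-worker pods through the nfd-worker-conf config map mentioned previously; this mapping is an assumption and can differ between Operator versions. To inspect the configuration that is currently in effect, you can view the config map, for example:
# Show the rendered nfd-worker configuration (assumes the default openshift-nfd namespace)
oc get configmap nfd-worker-conf -n openshift-nfd -o yaml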
core.sleepInterval core.sleepInterval specifies the interval between consecutive passes of feature detection or re-detection, and thus also the interval between node re-labeling. A non-positive value implies infinite sleep interval; no re-detection or re-labeling is done. This value is overridden by the deprecated --sleep-interval command line flag, if specified. Example usage core: sleepInterval: 60s 1 The default value is 60s . core.sources core.sources specifies the list of enabled feature sources. A special value all enables all feature sources. This value is overridden by the deprecated --sources command line flag, if specified. Default: [all] Example usage core: sources: - system - custom core.labelWhiteList core.labelWhiteList specifies a regular expression for filtering feature labels based on the label name. Non-matching labels are not published. The regular expression is only matched against the basename part of the label, the part of the name after '/'. The label prefix, or namespace, is omitted. This value is overridden by the deprecated --label-whitelist command line flag, if specified. Default: null Example usage core: labelWhiteList: '^cpu-cpuid' core.noPublish Setting core.noPublish to true disables all communication with the nfd-master . It is effectively a dry run flag; nfd-worker runs feature detection normally, but no labeling requests are sent to nfd-master . This value is overridden by the --no-publish command line flag, if specified. Example: Example usage core: noPublish: true 1 The default value is false . core.klog The following options specify the logger configuration, most of which can be dynamically adjusted at run-time. The logger options can also be specified using command line flags, which take precedence over any corresponding config file options. core.klog.addDirHeader If set to true , core.klog.addDirHeader adds the file directory to the header of the log messages. Default: false Run-time configurable: yes core.klog.alsologtostderr Log to standard error as well as files. Default: false Run-time configurable: yes core.klog.logBacktraceAt When logging hits line file:N, emit a stack trace. Default: empty Run-time configurable: yes core.klog.logDir If non-empty, write log files in this directory. Default: empty Run-time configurable: no core.klog.logFile If not empty, use this log file. Default: empty Run-time configurable: no core.klog.logFileMaxSize core.klog.logFileMaxSize defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0 , the maximum file size is unlimited. Default: 1800 Run-time configurable: no core.klog.logtostderr Log to standard error instead of files Default: true Run-time configurable: yes core.klog.skipHeaders If core.klog.skipHeaders is set to true , avoid header prefixes in the log messages. Default: false Run-time configurable: yes core.klog.skipLogHeaders If core.klog.skipLogHeaders is set to true , avoid headers when opening log files. Default: false Run-time configurable: no core.klog.stderrthreshold Logs at or above this threshold go to stderr. Default: 2 Run-time configurable: yes core.klog.v core.klog.v is the number for the log level verbosity. Default: 0 Run-time configurable: yes core.klog.vmodule core.klog.vmodule is a comma-separated list of pattern=N settings for file-filtered logging. Default: empty Run-time configurable: yes 3.3.2. sources The sources section contains feature source specific configuration parameters. 
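After you change source options and the workers re-run detection, you can check which feature labels were actually published. A simple sketch with a placeholder node name; the exact labels depend on your hardware:
# List the NFD feature labels on one node
oc get node <node_name> --show-labels | tr ',' '\n' | grep feature.node.kubernetes.io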
sources.cpu.cpuid.attributeBlacklist Prevent publishing cpuid features listed in this option. This value is overridden by sources.cpu.cpuid.attributeWhitelist , if specified. Default: [BMI1, BMI2, CLMUL, CMOV, CX16, ERMS, F16C, HTT, LZCNT, MMX, MMXEXT, NX, POPCNT, RDRAND, RDSEED, RDTSCP, SGX, SGXLC, SSE, SSE2, SSE3, SSE4.1, SSE4.2, SSSE3] Example usage sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT] sources.cpu.cpuid.attributeWhitelist Only publish the cpuid features listed in this option. sources.cpu.cpuid.attributeWhitelist takes precedence over sources.cpu.cpuid.attributeBlacklist . Default: empty Example usage sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL] sources.kernel.kconfigFile sources.kernel.kconfigFile is the path of the kernel config file. If empty, NFD runs a search in the well-known standard locations. Default: empty Example usage sources: kernel: kconfigFile: "/path/to/kconfig" sources.kernel.configOpts sources.kernel.configOpts represents kernel configuration options to publish as feature labels. Default: [NO_HZ, NO_HZ_IDLE, NO_HZ_FULL, PREEMPT] Example usage sources: kernel: configOpts: [NO_HZ, X86, DMI] sources.pci.deviceClassWhitelist sources.pci.deviceClassWhitelist is a list of PCI device class IDs for which to publish a label. It can be specified as a main class only (for example, 03 ) or full class-subclass combination (for example 0300 ). The former implies that all subclasses are accepted. The format of the labels can be further configured with deviceLabelFields . Default: ["03", "0b40", "12"] Example usage sources: pci: deviceClassWhitelist: ["0200", "03"] sources.pci.deviceLabelFields sources.pci.deviceLabelFields is the set of PCI ID fields to use when constructing the name of the feature label. Valid fields are class , vendor , device , subsystem_vendor and subsystem_device . Default: [class, vendor] Example usage sources: pci: deviceLabelFields: [class, vendor, device] With the example config above, NFD would publish labels such as feature.node.kubernetes.io/pci-<class-id>_<vendor-id>_<device-id>.present=true sources.usb.deviceClassWhitelist sources.usb.deviceClassWhitelist is a list of USB device class IDs for which to publish a feature label. The format of the labels can be further configured with deviceLabelFields . Default: ["0e", "ef", "fe", "ff"] Example usage sources: usb: deviceClassWhitelist: ["ef", "ff"] sources.usb.deviceLabelFields sources.usb.deviceLabelFields is the set of USB ID fields from which to compose the name of the feature label. Valid fields are class , vendor , and device . Default: [class, vendor, device] Example usage sources: pci: deviceLabelFields: [class, vendor] With the example config above, NFD would publish labels like: feature.node.kubernetes.io/usb-<class-id>_<vendor-id>.present=true . sources.custom sources.custom is the list of rules to process in the custom feature source to create user-specific labels. Default: empty Example usage source: custom: - name: "my.custom.feature" matchOn: - loadedKMod: ["e1000e"] - pciId: class: ["0200"] vendor: ["8086"] 3.4. About the NodeFeatureRule custom resource NodeFeatureRule objects are a NodeFeatureDiscovery custom resource designed for rule-based custom labeling of nodes. Some use cases include application-specific labeling or distribution by hardware vendors to create specific labels for their devices. NodeFeatureRule objects provide a method to create vendor- or application-specific labels and taints. 
It uses a flexible rule-based mechanism for creating labels and optionally taints based on node features. 3.5. Using the NodeFeatureRule custom resource Create a NodeFeatureRule object to label nodes if a set of rules match the conditions. Procedure Create a custom resource file named nodefeaturerule.yaml that contains the following text: apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: "example rule" labels: "example-custom-feature": "true" # Label is created if all of the rules below match matchFeatures: # Match if "veth" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: ["8086"]} This custom resource specifies that labeling occurs when the veth module is loaded and any PCI device with vendor code 8086 exists in the cluster. Apply the nodefeaturerule.yaml file to your cluster by running the following command: USD oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml The example applies the feature label on nodes where the veth module is loaded and any PCI device with vendor code 8086 exists. Note A relabeling delay of up to 1 minute might occur. 3.6. Using the NFD Topology Updater The Node Feature Discovery (NFD) Topology Updater is a daemon responsible for examining allocated resources on a worker node. It accounts for resources that are available to be allocated to new pods on a per-zone basis, where a zone can be a Non-Uniform Memory Access (NUMA) node. The NFD Topology Updater communicates the information to nfd-master, which creates a NodeResourceTopology custom resource (CR) corresponding to all of the worker nodes in the cluster. One instance of the NFD Topology Updater runs on each node of the cluster. To enable the Topology Updater workers in NFD, set the topologyupdater variable to true in the NodeFeatureDiscovery CR, as described in the section Using the Node Feature Discovery Operator . 3.6.1. NodeResourceTopology CR When run with NFD Topology Updater, NFD creates custom resource instances corresponding to the node resource hardware topology, such as: apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: ["SingleNUMANodeContainerLevel"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 3.6.2. NFD Topology Updater command line flags To view available command line flags, run the nfd-topology-updater -help command. For example, in a podman container, run the following command: USD podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help -ca-file The -ca-file flag is one of the three flags, together with the -cert-file and -key-file flags, that controls the mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS root certificate that is used for verifying the authenticity of nfd-master. 
Default: empty Important The -ca-file flag must be specified together with the -cert-file and -key-file flags. Example USD nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -cert-file The -cert-file flag is one of the three flags, together with the -ca-file and -key-file flags, that controls mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS certificate presented for authenticating outgoing requests. Default: empty Important The -cert-file flag must be specified together with the -ca-file and -key-file flags. Example USD nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt -h, -help Print usage and exit. -key-file The -key-file flag is one of the three flags, together with the -ca-file and -cert-file flags, that controls the mutual TLS authentication on the NFD Topology Updater. This flag specifies the private key corresponding to the given certificate file, or -cert-file , that is used for authenticating outgoing requests. Default: empty Important The -key-file flag must be specified together with the -ca-file and -cert-file flags. Example USD nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt -kubelet-config-file The -kubelet-config-file flag specifies the path to the Kubelet's configuration file. Default: /host-var/lib/kubelet/config.yaml Example USD nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml -no-publish The -no-publish flag disables all communication with the nfd-master, making it a dry run flag for nfd-topology-updater. NFD Topology Updater runs resource hardware topology detection normally, but no CR requests are sent to nfd-master. Default: false Example USD nfd-topology-updater -no-publish 3.6.2.1. -oneshot The -oneshot flag causes the NFD Topology Updater to exit after one pass of resource hardware topology detection. Default: false Example USD nfd-topology-updater -oneshot -no-publish -podresources-socket The -podresources-socket flag specifies the path to the Unix socket where kubelet exports a gRPC service to enable discovery of in-use CPUs and devices, and to provide metadata for them. Default: /host-var/lib/kubelet/pod-resources/kubelet.sock Example USD nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock -server The -server flag specifies the address of the nfd-master endpoint to connect to. Default: localhost:8080 Example USD nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443 -server-name-override The -server-name-override flag specifies the common name (CN) to expect from the nfd-master TLS certificate. This flag is mostly intended for development and debugging purposes. Default: empty Example USD nfd-topology-updater -server-name-override=localhost -sleep-interval The -sleep-interval flag specifies the interval between resource hardware topology re-examination and custom resource updates. A non-positive value implies infinite sleep interval and no re-detection is done. Default: 60s Example USD nfd-topology-updater -sleep-interval=1h -version Print version and exit. -watch-namespace The -watch-namespace flag specifies the namespace to ensure that resource hardware topology examination only happens for the pods running in the specified namespace. Pods that are not running in the specified namespace are not considered during resource accounting. 
This is particularly useful for testing and debugging purposes. A * value means that all of the pods across all namespaces are considered during the accounting process. Default: * Example USD nfd-topology-updater -watch-namespace=rte
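The flags described above can be combined in a single invocation. The following sketch reuses only values that appear earlier in this chapter, such as the default kubelet paths and the rte example namespace; adjust them for your environment:
# Run the topology updater with explicit kubelet paths, a longer re-scan interval, and a namespace filter
nfd-topology-updater \
  -kubelet-config-file=/host-var/lib/kubelet/config.yaml \
  -podresources-socket=/host-var/lib/kubelet/pod-resources/kubelet.sock \
  -sleep-interval=1h \
  -watch-namespace=rte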
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: \"true\"", "oc create -f nfd-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd", "oc create -f nfd-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f nfd-sub.yaml", "oc project openshift-nfd", "oc get pods", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.17 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]", "oc apply -f <filename>", "oc get pods", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s", "skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version>", "skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12", "{ \"Digest\": \"sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\", }", "skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>", "skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> 1 imagePullPolicy: Always workerConfig: configData: | core: # 
labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]", "oc apply -f <filename>", "oc get nodefeaturediscovery nfd-instance -o yaml", "oc get pods -n <nfd_namespace>", "core: sleepInterval: 60s 1", "core: sources: - system - custom", "core: labelWhiteList: '^cpu-cpuid'", "core: noPublish: true 1", "sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]", "sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]", "sources: kernel: kconfigFile: \"/path/to/kconfig\"", "sources: kernel: configOpts: [NO_HZ, X86, DMI]", "sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]", "sources: pci: deviceLabelFields: [class, vendor, device]", "sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]", "sources: pci: deviceLabelFields: [class, vendor]", "source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: \"example rule\" labels: \"example-custom-feature\": \"true\" # Label is created if all of the rules below match matchFeatures: # Match if \"veth\" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: [\"8086\"]}", "oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml", "apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3", "podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help", "nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key", "nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt", "nfd-topology-updater -key-file=/opt/nfd/updater.key 
-cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt", "nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml", "nfd-topology-updater -no-publish", "nfd-topology-updater -oneshot -no-publish", "nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock", "nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443", "nfd-topology-updater -server-name-override=localhost", "nfd-topology-updater -sleep-interval=1h", "nfd-topology-updater -watch-namespace=rte" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator
Chapter 6. Desktop
Chapter 6. Desktop LibreOffice rebased to version 4.3.7.2 The libreoffice packages have been upgraded to upstream version 4.3.7.2, which provides a number of bug fixes and enhancements over the previous version, including: The possibility to print comments in the page margin has been added. Support for nested comments has been added. OpenXML interoperability has been improved. Accessibility support has been enhanced. The color picker has been improved. The start center has been improved. Initial HiDPI support has been added. The limitation on the number of characters in a paragraph has been raised considerably. For a complete list of bug fixes and enhancements provided by this upgrade, refer to https://wiki.documentfoundation.org/ReleaseNotes/4.3. (BZ# 1258467 ) mesa now supports additional Intel 3D graphics The mesa package now supports integrated 3D graphics on 6th generation Intel Core processors, Intel Xeon processor E3 v5, and current Intel Pentium and Intel Celeron-branded processors. (BZ#1135362) New Vinagre features This update provides a number of features to Vinagre. Namely: The ability to connect through the RDP protocol to remote Windows machines has been added. If requested, credentials can be stored in a keyring for RDP connections. A Minimize button has been added to the fullscreen toolbar so that users do not need to leave fullscreen mode to minimize the whole window. In addition, the /apps/vinagre/plugins/active-plugins GConf key is now ignored as it could cause RDP not to be loaded. (BZ#1215093) vmwgfx now supports 3D operations under VMware Workstation 10 The vmwgfx driver has been updated to version 4.4, which enables vmwgfx support for 3D operations under VMware Workstation 10. With this upgrade, the vmwgfx driver now allows a virtualized Red Hat Enterprise Linux 6 system to work as intended on Windows workstations. (BZ#1164447) x3270 rebased to version 3.3.15 The latest update of x3270 in Red Hat Enterprise Linux 6.8 adds support for oversize, dynamic screen resolutions, that is, screen adjustment on window resizing, to the IBM 3270 terminal emulator for the X Window System. Viewing larger screen sizes thus works properly and larger files or outputs on the mainframe appear as expected. (BZ#1171849) icedtea-web rebased to version 1.6.2 The icedtea-web packages have been upgraded to upstream version 1.6.2, which provides a number of bug fixes and enhancements over the previous version. Notable changes include the following: The IcedTea-Web documentation and man pages have been significantly expanded. IcedTea-Web now supports bash completion. The Custom Policies and Run in Sandbox features have been enhanced. An -html switch has been implemented for the Java Web Start (JavaWS) framework, which can serve as a replacement of the AppletViewer program. It is now possible to use IcedTea-Web to create desktop and menu launchers for applets and JavaWS applications. (BZ#1275523)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/new_features_desktop
Chapter 8. LVM Administration with the LVM GUI
Chapter 8. LVM Administration with the LVM GUI In addition to the Command Line Interface (CLI), LVM provides a Graphical User Interface (GUI) which you can use to configure LVM logical volumes. You can open this utility by typing system-config-lvm . The LVM chapter of the Storage Administration Guide provides step-by-step instructions for configuring an LVM logical volume using this utility.
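If the utility is not present on your system, it is typically provided by a package of the same name; the package name below is an assumption and might differ on your release:
# Install and start the LVM GUI
yum install system-config-lvm
system-config-lvm &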
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_gui
Chapter 36. Analyzing system performance with BPF Compiler Collection
Chapter 36. Analyzing system performance with BPF Compiler Collection As a system administrator, you can use the BPF Compiler Collection (BCC) library to create tools for analyzing the performance of your Linux operating system and gathering information, which could be difficult to obtain through other interfaces. 36.1. Installing the bcc-tools package Install the bcc-tools package, which also installs the BPF Compiler Collection (BCC) library as a dependency. Procedure Install bcc-tools . The BCC tools are installed in the /usr/share/bcc/tools/ directory. Verification Inspect the installed tools: The doc directory in the listing provides documentation for each tool. 36.2. Using selected bcc-tools for performance analyses Use certain pre-created programs from the BPF Compiler Collection (BCC) library to efficiently and securely analyze the system performance on a per-event basis. The set of pre-created programs in the BCC library can serve as examples for creation of additional programs. Prerequisites Installed bcc-tools package Root permissions Procedure Using execsnoop to examine the system processes Run the execsnoop program in one terminal: To create a short-lived process of the ls command, in another terminal, enter: The terminal running execsnoop shows the output similar to the following: The execsnoop program prints a line of output for each new process that consumes system resources. It even detects processes of programs that run very briefly, such as ls , and that most monitoring tools would not register. The execsnoop output displays the following fields: PCOMM The parent process name. ( ls ) PID The process ID. ( 8382 ) PPID The parent process ID. ( 8287 ) RET The return value of the exec() system call ( 0 ), which loads program code into new processes. ARGS The location of the started program with arguments. To see more details, examples, and options for execsnoop , see the /usr/share/bcc/tools/doc/execsnoop_example.txt file. For more information about exec() , see exec(3) manual pages. Using opensnoop to track what files a command opens In one terminal, run the opensnoop program to print the output for files opened only by the process of the uname command: In another terminal, enter the command to open certain files: The terminal running opensnoop shows the output similar to the following: The opensnoop program watches the open() system call across the whole system, and prints a line of output for each file that uname tried to open along the way. The opensnoop output displays the following fields: PID The process ID. ( 8596 ) COMM The process name. ( uname ) FD The file descriptor - a value that open() returns to refer to the open file. ( 3 ) ERR Any errors. PATH The location of files that open() tried to open. If a command tries to read a non-existent file, then the FD column returns -1 and the ERR column prints a value corresponding to the relevant error. As a result, opensnoop can help you identify an application that does not behave properly. To see more details, examples, and options for opensnoop , see the /usr/share/bcc/tools/doc/opensnoop_example.txt file. For more information about open() , see open(2) manual pages. Using biotop to monitor the top processes performing I/O operations on the disk Run the biotop program in one terminal with the argument 30 to produce a 30 second summary: Note When no argument is provided, the output screen by default refreshes every 1 second. 
In another terminal, enter the command to read the content from the local hard disk device and write the output to the /dev/zero file: This step generates certain I/O traffic to illustrate biotop . The terminal running biotop shows the output similar to the following: The biotop output displays the following fields: PID The process ID. ( 9568 ) COMM The process name. ( dd ) DISK The disk performing the read operations. ( vda ) I/O The number of read operations performed. (16294) Kbytes The amount of Kbytes read by the read operations. (14,440,636) AVGms The average I/O time of read operations. (3.69) For more details, examples, and options for biotop , see the /usr/share/bcc/tools/doc/biotop_example.txt file. For more information about dd , see dd(1) manual pages. Using xfsslower to expose unexpectedly slow file system operations The xfsslower program measures the time spent by the XFS file system in performing read, write, open, or sync ( fsync ) operations. The 1 argument ensures that the program shows only the operations that are slower than 1 ms. Run the xfsslower program in one terminal: Note When no arguments are provided, xfsslower by default displays operations slower than 10 ms. In another terminal, enter the command to create a text file in the vim editor to start interaction with the XFS file system: The terminal running xfsslower shows something similar upon saving the file from the previous step: Each line represents an operation in the file system, which took more time than a certain threshold. xfsslower detects possible file system problems, which can take the form of unexpectedly slow operations. The xfsslower output displays the following fields: COMM The process name. ( b'bash' ) T The operation type. ( R ) R (read) W (write) S (sync) OFF_KB The file offset in KB. (0) FILENAME The file that is read, written, or synced. To see more details, examples, and options for xfsslower , see the /usr/share/bcc/tools/doc/xfsslower_example.txt file. For more information about fsync , see fsync(2) manual pages.
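Because these tools print continuously until they are interrupted, it is often convenient to capture their output for a fixed period and review it afterwards. A simple sketch that uses only the standard timeout utility and the tool paths shown above:
# Record 30 seconds of new-process activity, stopping execsnoop with SIGINT so it exits cleanly
timeout -s INT 30 /usr/share/bcc/tools/execsnoop > /tmp/execsnoop.log
# Review the captured output
less /tmp/execsnoop.log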
[ "dnf install bcc-tools", "ls -l /usr/share/bcc/tools/ -rwxr-xr-x. 1 root root 4198 Dec 14 17:53 dcsnoop -rwxr-xr-x. 1 root root 3931 Dec 14 17:53 dcstat -rwxr-xr-x. 1 root root 20040 Dec 14 17:53 deadlock_detector -rw-r--r--. 1 root root 7105 Dec 14 17:53 deadlock_detector.c drwxr-xr-x. 3 root root 8192 Mar 11 10:28 doc -rwxr-xr-x. 1 root root 7588 Dec 14 17:53 execsnoop -rwxr-xr-x. 1 root root 6373 Dec 14 17:53 ext4dist -rwxr-xr-x. 1 root root 10401 Dec 14 17:53 ext4slower", "/usr/share/bcc/tools/execsnoop", "ls /usr/share/bcc/tools/doc/", "PCOMM PID PPID RET ARGS ls 8382 8287 0 /usr/bin/ls --color=auto /usr/share/bcc/tools/doc/", "/usr/share/bcc/tools/opensnoop -n uname", "uname", "PID COMM FD ERR PATH 8596 uname 3 0 /etc/ld.so.cache 8596 uname 3 0 /lib64/libc.so.6 8596 uname 3 0 /usr/lib/locale/locale-archive", "/usr/share/bcc/tools/biotop 30", "dd if=/dev/vda of=/dev/zero", "PID COMM D MAJ MIN DISK I/O Kbytes AVGms 9568 dd R 252 0 vda 16294 14440636.0 3.69 48 kswapd0 W 252 0 vda 1763 120696.0 1.65 7571 gnome-shell R 252 0 vda 834 83612.0 0.33 1891 gnome-shell R 252 0 vda 1379 19792.0 0.15 7515 Xorg R 252 0 vda 280 9940.0 0.28 7579 llvmpipe-1 R 252 0 vda 228 6928.0 0.19 9515 gnome-control-c R 252 0 vda 62 6444.0 0.43 8112 gnome-terminal- R 252 0 vda 67 2572.0 1.54 7807 gnome-software R 252 0 vda 31 2336.0 0.73 9578 awk R 252 0 vda 17 2228.0 0.66 7578 llvmpipe-0 R 252 0 vda 156 2204.0 0.07 9581 pgrep R 252 0 vda 58 1748.0 0.42 7531 InputThread R 252 0 vda 30 1200.0 0.48 7504 gdbus R 252 0 vda 3 1164.0 0.30 1983 llvmpipe-1 R 252 0 vda 39 724.0 0.08 1982 llvmpipe-0 R 252 0 vda 36 652.0 0.06", "/usr/share/bcc/tools/xfsslower 1", "vim text", "TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME 13:07:14 b'bash' 4754 R 256 0 7.11 b'vim' 13:07:14 b'vim' 4754 R 832 0 4.03 b'libgpm.so.2.1.0' 13:07:14 b'vim' 4754 R 32 20 1.04 b'libgpm.so.2.1.0' 13:07:14 b'vim' 4754 R 1982 0 2.30 b'vimrc' 13:07:14 b'vim' 4754 R 1393 0 2.52 b'getscriptPlugin.vim' 13:07:45 b'vim' 4754 S 0 0 6.71 b'text' 13:07:45 b'pool' 2588 R 16 0 5.58 b'text'" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/analyzing-system-performance-with-bpf-compiler_collection_monitoring-and-managing-system-status-and-performance
Chapter 3. Migrating data between cache stores
Chapter 3. Migrating data between cache stores Data Grid provides a Java utility for migrating persistent data between cache stores. In the case of upgrading Data Grid, functional differences between major versions do not allow backwards compatibility between cache stores. You can use StoreMigrator to convert your data so that it is compatible with the target version. For example, upgrading to Data Grid 8.0 changes the default marshaller to Protostream. In previous Data Grid versions, cache stores use a binary format that is not compatible with the changes to marshalling. This means that Data Grid 8.0 cannot read from cache stores created with previous Data Grid versions. In other cases, newer Data Grid versions deprecate or remove cache store implementations, such as JDBC Mixed and Binary stores. You can use StoreMigrator in these cases to convert to different cache store implementations. 3.1. Cache store migrator Data Grid provides the StoreMigrator.java utility that recreates data for the latest Data Grid cache store implementations. StoreMigrator takes a cache store from a previous version of Data Grid as source and uses the latest cache store implementation as target. When you run StoreMigrator , it creates the target cache with the cache store type that you define using the EmbeddedCacheManager interface. StoreMigrator then loads entries from the source store into memory and then puts them into the target cache. StoreMigrator also lets you migrate data from one type of cache store to another. For example, you can migrate from a JDBC string-based cache store to a RocksDB cache store. Important StoreMigrator cannot migrate data from segmented cache stores to: Non-segmented cache stores. Segmented cache stores that have a different number of segments. 3.2. Configuring the cache store migrator Use the migrator.properties file to configure properties for source and target cache stores. Procedure Create a migrator.properties file. Configure properties for source and target cache stores using the migrator.properties file. Add the source. prefix to all configuration properties for the source cache store. Example source cache store Important For migrating data from segmented cache stores, you must also configure the number of segments using the source.segment_count property. The number of segments must match clustering.hash.numSegments in your Data Grid configuration. If the number of segments for a cache store does not match the number of segments for the corresponding cache, Data Grid cannot read data from the cache store. Add the target. prefix to all configuration properties for the target cache store. Example target cache store 3.2.1. Configuration properties for the cache store migrator Configure source and target cache stores in a StoreMigrator properties file. Table 3.1. Cache Store Type Property Property Description Required/Optional type Specifies the type of cache store for a source or target cache store. .type=JDBC_STRING .type=JDBC_BINARY .type=JDBC_MIXED .type=LEVELDB .type=ROCKSDB .type=SINGLE_FILE_STORE .type=SOFT_INDEX_FILE_STORE Required Table 3.2. Common Properties Property Description Example Value Required/Optional cache_name The name of the cache that you want to back up. .cache_name=myCache Required segment_count The number of segments for target cache stores that can use segmentation. The number of segments must match clustering.hash.numSegments in the Data Grid configuration. 
If the number of segments for a cache store does not match the number of segments for the corresponding cache, Data Grid cannot read data from the cache store. .segment_count=256 Optional Table 3.3. JDBC Properties Property Description Required/Optional dialect Specifies the dialect of the underlying database. Required version Specifies the marshaller version for source cache stores. Set one of the following values: * 8 for Data Grid 7.2.x * 9 for Data Grid 7.3.x * 10 for Data Grid 8.0.x * 11 for Data Grid 8.1.x * 12 for Data Grid 8.2.x * 13 for Data Grid 8.3.x Required for source stores only. marshaller.class Specifies a custom marshaller class. Required if using custom marshallers. marshaller.externalizers Specifies a comma-separated list of custom AdvancedExternalizer implementations to load in this format: [id]:<Externalizer class> Optional connection_pool.connection_url Specifies the JDBC connection URL. Required connection_pool.driver_class Specifies the class of the JDBC driver. Required connection_pool.username Specifies a database username. Required connection_pool.password Specifies a password for the database username. Required db.disable_upsert Disables database upsert. Optional db.disable_indexing Specifies if table indexes are created. Optional table.string.table_name_prefix Specifies additional prefixes for the table name. Optional table.string.<id|data|timestamp>.name Specifies the column name. Required table.string.<id|data|timestamp>.type Specifies the column type. Required key_to_string_mapper Specifies the TwoWayKey2StringMapper class. Optional Note To migrate from Binary cache stores in older Data Grid versions, change table.string.* to table.binary.* in the following properties: source.table.binary.table_name_prefix source.table.binary.<id|data|timestamp>.name source.table.binary.<id|data|timestamp>.type Table 3.4. RocksDB Properties Property Description Required/Optional location Sets the database directory. Required compression Specifies the compression type to use. Optional Table 3.5. SingleFileStore Properties Property Description Required/Optional location Sets the directory that contains the cache store .dat file. Required Table 3.6. SoftIndexFileStore Properties Property Description Value Required/Optional location Sets the database directory. Required index_location Sets the database index directory. 3.3. Migrating Data Grid cache stores You can use the StoreMigrator to migrate data between cache stores with different Data Grid versions or to migrate data from one type of cache store to another. Prerequisites Have an infinispan-tools.jar . Have the source and target cache store configured in the migrator.properties file. Procedure If you built the infinispan-tools.jar from the source code, do the following: Add infinispan-tools.jar to your classpath. Add dependencies for your source and target databases, such as JDBC drivers, to your classpath. Specify the migrator.properties file as an argument for StoreMigrator . If you pulled infinispan-tools.jar from the Maven repository, run the following command: mvn exec:java
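If you run the migrator directly from infinispan-tools.jar , the invocation is a plain java command with the JAR, its dependencies, and your JDBC driver on the classpath, and the properties file as the only argument. The following is a sketch only; the fully qualified class name and the driver JAR name are assumptions and should be checked against the infinispan-tools.jar shipped with your Data Grid version:
# Run the migrator with the configuration from migrator.properties
java -cp infinispan-tools.jar:postgresql-jdbc-driver.jar \
  org.infinispan.tools.store.migrator.StoreMigrator migrator.properties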
[ "source.type=SOFT_INDEX_FILE_STORE source.cache_name=myCache source.location=/path/to/source/sifs source.version=<version>", "target.type=SINGLE_FILE_STORE target.cache_name=myCache target.location=/path/to/target/sfs.dat", "Example configuration for migrating to a JDBC String-Based cache store target.type=STRING target.cache_name=myCache target.dialect=POSTGRES target.marshaller.class=org.example.CustomMarshaller target.marshaller.externalizers=25:Externalizer1,org.example.Externalizer2 target.connection_pool.connection_url=jdbc:postgresql:postgres target.connection_pool.driver_class=org.postrgesql.Driver target.connection_pool.username=postgres target.connection_pool.password=redhat target.db.disable_upsert=false target.db.disable_indexing=false target.table.string.table_name_prefix=tablePrefix target.table.string.id.name=id_column target.table.string.data.name=datum_column target.table.string.timestamp.name=timestamp_column target.table.string.id.type=VARCHAR target.table.string.data.type=bytea target.table.string.timestamp.type=BIGINT target.key_to_string_mapper=org.infinispan.persistence.keymappers. DefaultTwoWayKey2StringMapper", "Example configuration for migrating from a RocksDB cache store. source.type=ROCKSDB source.cache_name=myCache source.location=/path/to/rocksdb/database source.compression=SNAPPY", "Example configuration for migrating to a Single File cache store. target.type=SINGLE_FILE_STORE target.cache_name=myCache target.location=/path/to/sfs.dat", "Example configuration for migrating to a Soft-Index File cache store. target.type=SOFT_INDEX_FILE_STORE target.cache_name=myCache target.location=path/to/sifs/database target.location=path/to/sifs/index", "mvn exec:java" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/upgrading_data_grid/migrating-data-between-stores
Part I. Getting started with Red Hat build of Kogito microservices
Part I. Getting started with Red Hat build of Kogito microservices As a developer of business decisions, you can use Red Hat build of Kogito business automation to develop decision services using Decision Model and Notation (DMN) models, Drools Rule Language (DRL) rules, Predictive Model Markup Language (PMML) or a combination of all three methods. Prerequisites JDK 11 or later is installed. Apache Maven 3.6.2 or later is installed.
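You can quickly confirm that the prerequisites are met before you start, for example:
# Both commands should report versions that satisfy the minimums above (JDK 11+, Maven 3.6.2+)
java -version
mvn -version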
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_process_automation_manager/assembly-getting-started-kogito-microservices
Chapter 9. Creating and uploading a customized RHEL VMDK system image to vSphere
Chapter 9. Creating and uploading a customized RHEL VMDK system image to vSphere You can create customized RHEL system images by using Insights image builder and upload those images to the VMware vSphere client. 9.1. Creating a customized RHEL VMDK system image by using Insights image builder With Insights image builder, you can create customized system images in the Open virtualization format ( .ova ) or in the Virtual disk ( .vmdk ) format. You can upload these images to VMware vSphere. You can import the Virtual disk ( .vmdk ) format only with the govc client. As for the Open virtualization format ( .ova ), you can import it by using both the vSphere GUI and govc clients. The Open virtualization format ( .ova ) is a .vmdk image with additional metadata about the virtual hardware, when imported it creates a VM. After importing the .ova image into vSphere, you can configure the VM with any additional hardware, such as network, disks and CD-ROM. Procedure Access Insights image builder on the browser. The Insights image builder dashboard appears. Click Create image . The Create image dialog wizard opens. On the Image output page, complete the following steps: From the Release list, select the Release that you want to use: for example, choose Red Hat Enterprise Linux (RHEL). From the Select target environments option, select VMware . Select one of the options: Open virtualization format ( .ova ) Virtual disk ( .vmdk ) format Click . On the Registration page, select the type of registration that you want to use. You can select from these options: Register images with Red Hat - Register and connect image instances, subscriptions and insights with Red Hat. For details on how to embed an activation key and register systems on first boot, see Creating a customized system image with an embed subscription by using Insights image builder . Register image instances only - Register and connect only image instances and subscriptions with Red Hat. Register later - Register the system after the image creation. Click . Optional: On the Packages page, add packages to your image. See Adding packages during image creation by using Insights image builder . On the Name image page, enter a name for your image and click . If you do not enter a name, you can find the image you created by its UUID. On the Review page, review the details about the image creation and click Create image . After you complete the steps in the Create image wizard, the image builder dashboard is displayed. When the new image displays a Ready status in the Status column, click Download .vmdk in the Instance column. The .vmdk image is saved to your system and is ready for deployment. Note The .vmdk images are available for 6 hours and expire after that. Ensure that you download the image to avoid losing it. Additional resource Creating a new image from an existing build 9.2. Deploying VMDK images to vSphere by using the GUI After creating your Open virtualization format ( .ova ) image, you can deploy it to VMware vSphere by using the vSphere GUI client. It will create a VM which can be customized further before booting. Note The GUI wizard does not support cloud-init . Prerequisite You logged in to the vSphere UI in a browser. You downloaded your ( .ova ) image. Procedure In the vSphere Client, from the Actions menu, select Deploy OVF Template . On the Deploy OVF Template page, complete the settings for each configuration option and click . Click Finish . The .ova image starts to be deployed. 
After the image deployment is complete, you have a new virtual machine (VM) from the .ova image. In the deployed image page, perform the following steps: From the Actions menu, select Edit Settings . On the Virtual Hardware tab, configure resources such as CPU and memory, add a new network adapter, and set other options of your choice. On the CD/DVD drive 1 option, attach a CD or DVD drive that contains a cloud-init.iso file, to provision a user on startup. The VM is now ready to boot with the username and password from the cloud-init.iso file. Additional resources Deploy an OVF or OVA Template The govc documentation The VMware - cloud init 22.2 documentation 9.3. Deploying VMDK images to vSphere by using the CLI After creating your image, you can deploy it to VMware vSphere by using the CLI. Then, you can create a VM and log in to it. Note The GUI wizard does not support cloud-init . Prerequisites You configured the govc VMware CLI tool client. To use the govc VMware CLI tool client, you must set the following values in the environment: Procedure Access the directory where you downloaded your .vmdk image. Create a file named metadata.yaml . Add the following information to this file: Create a file named userdata.yaml . Add the following information to the file: ssh_authorized_keys is your SSH public key. You can find your SSH public key in ~/.ssh/id_rsa.pub . Export the metadata.yaml and userdata.yaml files to the environment, compressed with gzip and encoded in base64, as follows. They are used in later steps. Launch the image on vSphere with the metadata.yaml and userdata.yaml files: Import the .vmdk image into vSphere: Create the VM in vSphere without powering it on: Change the VM to add the ExtraConfig variables that hold the cloud-init configuration: Power on the VM: Retrieve the VM IP address: Use SSH to log in to the VM, using the user-data specified in the cloud-init file configuration: Additional resources The govc documentation The VMware - cloud init 22.2 documentation
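The GUI procedure above expects a cloud-init.iso file attached as a CD/DVD drive, and one common way to build such an ISO is the cloud-init NoCloud seed format: a small ISO with the volume label cidata that contains user-data and meta-data files. The following sketch assumes that the genisoimage utility is available and that the guest image reads the NoCloud datasource; the file contents follow the same pattern as the metadata.yaml and userdata.yaml examples in the CLI procedure:
# Build a NoCloud seed ISO from user-data and meta-data files in the current directory
genisoimage -output cloud-init.iso -volid cidata -joliet -rock user-data meta-data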
[ "GOVC_URL GOVC_DATACENTER GOVC_FOLDER GOVC_DATASTORE GOVC_RESOURCE_POOL GOVC_NETWORK", "instance-id: cloud-vm local-hostname: vmname", "#cloud-config users: - name: admin sudo: \"ALL=(ALL) NOPASSWD:ALL\" ssh_authorized_keys: - ssh-rsa AAA...fhHQ== [email protected]", "export METADATA=USD(gzip -c9 <metadata.yaml | { base64 -w0 2>/dev/null || base64; }) USERDATA=USD(gzip -c9 <userdata.yaml | { base64 -w0 2>/dev/null || base64; })", "govc import.vmdk ./composer-api.vmdk foldername", "govc vm.create -net.adapter=vmxnet3 -m=4096 -c=2 -g=rhel8_64Guest -firmware=bios -disk=\" foldername /composer-api.vmdk\" -disk.controller=ide -on=false vmname", "govc vm.change -vm vmname -e guestinfo.metadata=\"USD{METADATA}\" -e guestinfo.metadata.encoding=\"gzip+base64\" -e guestinfo.userdata=\"USD{USERDATA}\" -e guestinfo.userdata.encoding=\"gzip+base64\"", "govc vm.power -on vmname", "HOST=USD(govc vm.ip vmname )", "ssh admin@HOST" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/creating_customized_images_by_using_insights_image_builder/assembly_creating-and-uploading-a-customized-rhel-vmdk-system-image-to-vsphere
Chapter 11. Log storage
Chapter 11. Log storage 11.1. About log storage You can use an internal Loki or Elasticsearch log store on your cluster for storing logs, or you can use a ClusterLogForwarder custom resource (CR) to forward logs to an external store. 11.1.1. Log storage types Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Elasticsearch indexes incoming log records completely during ingestion. Loki indexes only a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly. 11.1.1.1. About the Elasticsearch log store The logging Elasticsearch instance is optimized and tested for short term storage, approximately seven days. If you want to retain your logs over a longer term, it is recommended you move the data to a third-party storage system. Elasticsearch organizes the log data from Fluentd into datastores, or indices , then subdivides each index into multiple pieces called shards , which it spreads across a set of Elasticsearch nodes in an Elasticsearch cluster. You can configure Elasticsearch to make copies of the shards, called replicas , which Elasticsearch also spreads across the Elasticsearch nodes. The ClusterLogging custom resource (CR) allows you to specify how the shards are replicated to provide data redundancy and resilience to failure. You can also specify how long the different types of logs are retained using a retention policy in the ClusterLogging CR. Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. The Red Hat OpenShift Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume. You can use a ClusterLogging custom resource (CR) to increase the number of Elasticsearch nodes, as needed. See the Elasticsearch documentation for considerations involved in configuring storage. Note A highly-available Elasticsearch environment requires at least three Elasticsearch nodes, each on a different host. Role-based access control (RBAC) applied on the Elasticsearch indices enables the controlled access of the logs to the developers. Administrators can access all logs and developers can access only the logs in their projects. 11.1.2. Querying log stores You can query Loki by using the LogQL log query language . 11.1.3. Additional resources Loki components documentation Loki Object Storage documentation 11.2. Installing log storage You can use the OpenShift CLI ( oc ) or the OpenShift Container Platform web console to deploy a log store on your OpenShift Container Platform cluster. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. 
As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 11.2.1. Deploying a Loki log store You can use the Loki Operator to deploy an internal Loki log store on your OpenShift Container Platform cluster. After install the Loki Operator, you must configure Loki object storage by creating a secret, and create a LokiStack custom resource (CR). 11.2.1.1. Loki deployment sizing Sizing for Loki follows the format of 1x.<size> where the value 1x is number of instances and <size> specifies performance capabilities. Important It is not possible to change the number 1x for the deployment size. Table 11.1. Loki sizing 1x.demo 1x.extra-small 1x.small 1x.medium Data transfer Demo use only 100GB/day 500GB/day 2TB/day Queries per second (QPS) Demo use only 1-25 QPS at 200ms 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 2 2 Total CPU requests None 14 vCPUs 34 vCPUs 54 vCPUs Total CPU requests if using the ruler None 16 vCPUs 42 vCPUs 70 vCPUs Total memory requests None 31Gi 67Gi 139Gi Total memory requests if using the ruler None 35Gi 83Gi 171Gi Total disk requests 40Gi 430Gi 430Gi 590Gi Total disk requests if using the ruler 80Gi 750Gi 750Gi 910Gi 11.2.1.2. Installing Logging and the Loki Operator using the web console To install and configure logging on your OpenShift Container Platform cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the Operator Hub within the web console. Prerequisites You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). You have administrator permissions. You have access to the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console Administrator perspective, go to Operators OperatorHub . Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install . Important The Community Loki Operator is not supported by Red Hat. Select stable or stable-x.y as the Update channel . Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . The Loki Operator must be deployed to the global operator group namespace openshift-operators-redhat , so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it is created for you. Select Enable Operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Install the Red Hat OpenShift Logging Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . 
Choose Red Hat OpenShift Logging from the list of available Operators, and click Install . Ensure that the A specific namespace on the cluster is selected under Installation Mode . Ensure that Operator recommended namespace is openshift-logging under Installed Namespace . Select Enable Operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. Select stable-5.y as the Update Channel . Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Go to the Operators Installed Operators page. Click the All instances tab. From the Create new drop-down list, select LokiStack . Select YAML view , and then use the following template to create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Important It is not possible to change the number 1x for the deployment size. Click Create . Create an OpenShift Logging instance: Switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition details page, select View Instances from the Actions menu. On the ClusterLoggings page, click Create ClusterLogging . You might have to refresh the page to load the data. 
In the YAML field, replace the code with the following: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 2 Namespace must be openshift-logging . Verification Go to Operators Installed Operators . Make sure the openshift-logging project is selected. In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date . Note An Operator might display a Failed status before the installation finishes. If the Operator install completes with an InstallSucceeded message, refresh the page. 11.2.1.3. Creating a secret for Loki object storage by using the web console To configure Loki object storage, you must create a secret. You can create a secret by using the OpenShift Container Platform web console. Prerequisites You have administrator permissions. You have access to the OpenShift Container Platform web console. You installed the Loki Operator. Procedure Go to Workloads Secrets in the Administrator perspective of the OpenShift Container Platform web console. From the Create drop-down list, select From YAML . Create a secret that uses the access_key_id and access_key_secret fields to specify your credentials and the bucketnames , endpoint , and region fields to define the object storage location. AWS is used in the following example: Example Secret object apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 Additional resources Loki object storage 11.2.2. Deploying a Loki log store on a cluster that uses short-term credentials For some storage providers, you can use the CCO utility ( ccoctl ) during installation to implement short-term credentials. These credentials are created and managed outside the OpenShift Container Platform cluster. Manual mode with short-term credentials for components . Note Short-term credential authentication must be configured during a new installation of Loki Operator, on a cluster that uses this credentials strategy. You cannot configure an existing cluster that uses a different credentials strategy to use this feature. 11.2.2.1. Workload identity federation Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. Prerequisites OpenShift Container Platform 4.14 and later Logging 5.9 and later Procedure If you use the OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. If you use the OpenShift CLI ( oc ) to install the Loki Operator, you must manually create a subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. 
Azure sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-5.9" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region> AWS sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-5.9" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN> 11.2.2.2. Creating a LokiStack custom resource by using the web console You can create a LokiStack custom resource (CR) by using the OpenShift Container Platform web console. Prerequisites You have administrator permissions. You have access to the OpenShift Container Platform web console. You installed the Loki Operator. Procedure Go to the Operators Installed Operators page. Click the All instances tab. From the Create new drop-down list, select LokiStack . Select YAML view , and then use the following template to create a LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 1 Use the name logging-loki . 2 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 3 Specify the secret used for your log storage. 4 Specify the corresponding storage type. 5 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 6 Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 11.2.2.3. Installing Logging and the Loki Operator using the CLI To install and configure logging on your OpenShift Container Platform cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the OpenShift Container Platform CLI. Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Note The stable channel only provides updates to the most recent release of logging. 
To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . Create a Namespace object for Loki Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, you should configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Container Platform metric, which would cause conflicts. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object for Loki Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a namespace object for the Red Hat OpenShift Logging Operator: Example namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-logging: "true" openshift.io/cluster-monitoring: "true" 2 1 The Red Hat OpenShift Logging Operator is only deployable to the openshift-logging namespace. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 1 You must specify the openshift-logging namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-logging namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . 
If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Apply the LokiStack CR object by running the following command: USD oc apply -f <filename>.yaml Create a ClusterLogging CR object: Example ClusterLogging CR object apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 2 Namespace must be openshift-logging . 
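Before applying the ClusterLogging CR in the next step, you can optionally validate it with a server-side dry run, and you can wait for the LokiStack applied earlier to become ready. This is a small optional sketch, not part of the documented procedure; it assumes the ClusterLogging CR above was saved to the hypothetical file clusterlogging.yaml and that your Loki Operator version exposes a Ready condition on the LokiStack resource.

# Block until the LokiStack applied earlier reports Ready (condition name assumed;
# check with: oc explain lokistack.status.conditions).
oc wait lokistack/logging-loki -n openshift-logging --for=condition=Ready --timeout=10m

# Server-side dry run: the API server validates the manifest but stores nothing.
oc apply --dry-run=server -f clusterlogging.yaml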
Apply the ClusterLogging CR object by running the following command: USD oc apply -f <filename>.yaml Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output USD oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m 11.2.2.4. Creating a secret for Loki object storage by using the CLI To configure Loki object storage, you must create a secret. You can do this by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the Loki Operator. You installed the OpenShift CLI ( oc ). Procedure Create a secret in the directory that contains your certificate and key files by running the following command: USD oc create secret generic -n openshift-logging <your_secret_name> \ --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password> Note Use generic or opaque secrets for best results. Verification Verify that a secret was created by running the following command: USD oc get secrets Additional resources Loki object storage 11.2.2.5. Creating a LokiStack custom resource by using the CLI You can create a LokiStack custom resource (CR) by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the Loki Operator. You installed the OpenShift CLI ( oc ). Procedure Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 1 Use the name logging-loki . 2 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 3 Specify the secret used for your log storage. 4 Specify the corresponding storage type. 5 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. 
token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 6 Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. Apply the LokiStack CR by running the following command: Verification Verify the installation by listing the pods in the openshift-logging project by running the following command and observing the output: USD oc get pods -n openshift-logging Confirm that you see several pods for components of the logging, similar to the following list: Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s 11.2.3. Loki object storage The Loki Operator supports AWS S3 , as well as other S3 compatible object stores such as Minio and OpenShift Data Foundation . Azure , GCS , and Swift are also supported. The recommended nomenclature for Loki storage is logging-loki- <your_storage_provider> . The following table shows the type values within the LokiStack custom resource (CR) for each storage provider. For more information, see the section on your storage provider. Table 11.2. Secret type quick reference Storage provider Secret type value AWS s3 Azure azure Google Cloud gcs Minio s3 OpenShift Data Foundation s3 Swift swift 11.2.3.1. AWS storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on AWS. You created an AWS IAM Policy and IAM User . Procedure Create an object storage secret with the name logging-loki-aws by running the following command: USD oc create secret generic logging-loki-aws \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<aws_bucket_endpoint>" \ --from-literal=access_key_id="<aws_access_key_id>" \ --from-literal=access_key_secret="<aws_access_key_secret>" \ --from-literal=region="<aws_region_of_your_bucket>" 11.2.3.1.1. AWS storage for STS enabled clusters If your cluster has STS enabled, the Cloud Credential Operator (CCO) supports short-term authentication using AWS tokens. You can create the Loki object storage secret manually by running the following command: USD oc -n openshift-logging create secret generic "logging-loki-aws" \ --from-literal=bucketnames="<s3_bucket_name>" \ --from-literal=region="<bucket_region>" \ --from-literal=audience="<oidc_audience>" 1 1 Optional annotation, default value is openshift . 11.2.3.2. 
Azure storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on Azure. Procedure Create an object storage secret with the name logging-loki-azure by running the following command: USD oc create secret generic logging-loki-azure \ --from-literal=container="<azure_container_name>" \ --from-literal=environment="<azure_environment>" \ 1 --from-literal=account_name="<azure_account_name>" \ --from-literal=account_key="<azure_account_key>" 1 Supported environment values are AzureGlobal , AzureChinaCloud , AzureGermanCloud , or AzureUSGovernment . 11.2.3.2.1. Azure storage for Microsoft Entra Workload ID enabled clusters If your cluster has Microsoft Entra Workload ID enabled, the Cloud Credential Operator (CCO) supports short-term authentication using Workload ID. You can create the Loki object storage secret manually by running the following command: USD oc -n openshift-logging create secret generic logging-loki-azure \ --from-literal=environment="<azure_environment>" \ --from-literal=account_name="<storage_account_name>" \ --from-literal=container="<container_name>" 11.2.3.3. Google Cloud Platform storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a project on Google Cloud Platform (GCP). You created a bucket in the same project. You created a service account in the same project for GCP authentication. Procedure Copy the service account credentials received from GCP into a file called key.json . Create an object storage secret with the name logging-loki-gcs by running the following command: USD oc create secret generic logging-loki-gcs \ --from-literal=bucketname="<bucket_name>" \ --from-file=key.json="<path/to/key.json>" 11.2.3.4. Minio storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You have Minio deployed on your cluster. You created a bucket on Minio. Procedure Create an object storage secret with the name logging-loki-minio by running the following command: USD oc create secret generic logging-loki-minio \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<minio_bucket_endpoint>" \ --from-literal=access_key_id="<minio_access_key_id>" \ --from-literal=access_key_secret="<minio_access_key_secret>" 11.2.3.5. OpenShift Data Foundation storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You deployed OpenShift Data Foundation . You configured your OpenShift Data Foundation cluster for object storage . 
Procedure Create an ObjectBucketClaim custom resource in the openshift-logging namespace: apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io Get bucket properties from the associated ConfigMap object by running the following command: BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}') Get bucket access key from the associated secret by running the following command: ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d) Create an object storage secret with the name logging-loki-odf by running the following command: USD oc create -n openshift-logging secret generic logging-loki-odf \ --from-literal=access_key_id="<access_key_id>" \ --from-literal=access_key_secret="<secret_access_key>" \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="https://<bucket_host>:<bucket_port>" 11.2.3.6. Swift storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on Swift. Procedure Create an object storage secret with the name logging-loki-swift by running the following command: USD oc create secret generic logging-loki-swift \ --from-literal=auth_url="<swift_auth_url>" \ --from-literal=username="<swift_usernameclaim>" \ --from-literal=user_domain_name="<swift_user_domain_name>" \ --from-literal=user_domain_id="<swift_user_domain_id>" \ --from-literal=user_id="<swift_user_id>" \ --from-literal=password="<swift_password>" \ --from-literal=domain_id="<swift_domain_id>" \ --from-literal=domain_name="<swift_domain_name>" \ --from-literal=container_name="<swift_container_name>" You can optionally provide project-specific data, region, or both by running the following command: USD oc create secret generic logging-loki-swift \ --from-literal=auth_url="<swift_auth_url>" \ --from-literal=username="<swift_usernameclaim>" \ --from-literal=user_domain_name="<swift_user_domain_name>" \ --from-literal=user_domain_id="<swift_user_domain_id>" \ --from-literal=user_id="<swift_user_id>" \ --from-literal=password="<swift_password>" \ --from-literal=domain_id="<swift_domain_id>" \ --from-literal=domain_name="<swift_domain_name>" \ --from-literal=container_name="<swift_container_name>" \ --from-literal=project_id="<swift_project_id>" \ --from-literal=project_name="<swift_project_name>" \ --from-literal=project_domain_id="<swift_project_domain_id>" \ --from-literal=project_domain_name="<swift_project_domain_name>" \ --from-literal=region="<swift_region>" 11.2.4. Deploying an Elasticsearch log store You can use the OpenShift Elasticsearch Operator to deploy an internal Elasticsearch log store on your OpenShift Container Platform cluster. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. 
As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 11.2.4.1. Storage considerations for Elasticsearch A persistent volume is required for each Elasticsearch deployment configuration. On OpenShift Container Platform this is achieved using persistent volume claims (PVCs). Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Fluentd ships any logs from systemd journal and /var/log/containers/*.log to Elasticsearch. Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity. By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have a free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED. Note These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. Although the alerts use the same default values, you cannot change these values in the alerts. 11.2.4.2. Installing the OpenShift Elasticsearch Operator by using the web console The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging. Prerequisites Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16GB of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory, up to a maximum of 64GB for each Elasticsearch node. Elasticsearch nodes can operate with a lower memory setting, though this is not recommended for production environments. Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Click OpenShift Elasticsearch Operator from the list of available Operators, and click Install . Ensure that the All namespaces on the cluster is selected under Installation mode . Ensure that openshift-operators-redhat is selected under Installed Namespace . You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as OpenShift Container Platform metric, which would cause conflicts. Select Enable operator recommended cluster monitoring on this namespace . 
This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Select stable-5.x as the Update channel . Select an Update approval strategy: The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Verify that the OpenShift Elasticsearch Operator installed by switching to the Operators Installed Operators page. Ensure that OpenShift Elasticsearch Operator is listed in all projects with a Status of Succeeded . 11.2.4.3. Installing the OpenShift Elasticsearch Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the OpenShift Elasticsearch Operator. Prerequisites Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Elasticsearch is a memory-intensive application. By default, OpenShift Container Platform installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three OpenShift Container Platform nodes might not have enough memory to run Elasticsearch within your cluster. If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes. You have administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Create a Namespace object as a YAML file: apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as metric, which would cause conflicts. 2 String. You must specify this label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object as a YAML file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {} 1 You must specify the openshift-operators-redhat namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object to subscribe the namespace to the OpenShift Elasticsearch Operator: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator 1 You must specify the openshift-operators-redhat namespace. 
2 Specify stable , or stable-x.y as the channel. See the following note. 3 Automatic allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. Manual requires a user with appropriate credentials to approve the Operator update. 4 Specify redhat-operators . If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM). Note Specifying stable installs the current version of the latest stable release. Using stable with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable major and minor release. Specifying stable-x.y installs the current minor version of a specific major release. Using stable-x.y with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable minor release within the major release. Apply the subscription by running the following command: USD oc apply -f <filename>.yaml The OpenShift Elasticsearch Operator is installed to the openshift-operators-redhat namespace and copied to each project in the cluster. Verification Run the following command: USD oc get csv -n --all-namespaces Observe the output and confirm that pods for the OpenShift Elasticsearch Operator exist in each namespace Example output NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded ... 11.2.5. Configuring log storage You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch. You have created a ClusterLogging CR. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Procedure Modify the ClusterLogging CR logStore spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... 
logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {} # ... 1 Specify the log store type. This can be either lokistack or elasticsearch . 2 Optional configuration options for the Elasticsearch log store. 3 Specify the redundancy type. This value can be ZeroRedundancy , SingleRedundancy , MultipleRedundancy , or FullRedundancy . 4 Optional configuration options for LokiStack. Example ClusterLogging CR to specify LokiStack as the log store apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki # ... Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 11.3. Configuring the LokiStack log store In logging documentation, LokiStack refers to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store. 11.3.1. Creating a new group for the cluster-admin user role Important Querying application logs for multiple namespaces as a cluster-admin user, where the sum total of characters of all of the namespaces in the cluster is greater than 5120, results in the error Parse error: input size too long (XXXX > 5120) . For better control over access to logs in LokiStack, make the cluster-admin user a member of the cluster-admin group. If the cluster-admin group does not exist, create it and add the desired users to it. Use the following procedure to create a new group for users with cluster-admin permissions. Procedure Enter the following command to create a new group: USD oc adm groups new cluster-admin Enter the following command to add the desired user to the cluster-admin group: USD oc adm groups add-users cluster-admin <username> Enter the following command to add cluster-admin user role to the group: USD oc adm policy add-cluster-role-to-group cluster-admin cluster-admin 11.3.2. LokiStack behavior during cluster restarts In logging version 5.8 and newer versions, when an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. Additional resources Pod disruption budgets Kubernetes documentation 11.3.3. Configuring Loki to tolerate node failure In the logging 5.8 and later versions, the Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. 
In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor , distributor , gateway , indexGateway , ingester , querier , queryFrontend , and ruler components. You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: Example user settings for the ingester component apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: ingester: podAntiAffinity: # ... requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname # ... 1 The stanza to define a required rule. 2 The key-value pair (label) that must be matched to apply the rule. Additional resources PodAntiAffinity v1 core Kubernetes documentation Assigning Pods to Nodes Kubernetes documentation Placing pods relative to other pods using affinity and anti-affinity rules 11.3.4. Zone aware data replication In the logging 5.8 and later versions, the Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small , 1x.small , or 1x.medium, the replication.factor field is automatically set to 2. To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. Example LokiStack CR with zone replication enabled apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4 1 Deprecated field, values entered are overwritten by replication.factor . 2 This value is automatically set when deployment size is selected at setup. 3 The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. 4 Defines zones in the form of a topology key that corresponds to a node label. 11.3.4.1. Recovering Loki pods from failed zones In OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster isn't configured to handle this, a zone failure can lead to service or data loss. Loki pods are part of a StatefulSet , and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. Warning The following procedure will delete the PVCs in the failed zone, and all data contained therein. 
To avoid complete data loss, the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. Prerequisites Logging version 5.8 or later. Verify your LokiStack CR has a replication factor greater than 1. Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. Procedure List the pods in Pending status by running the following command: oc get pods --field-selector status.phase==Pending -n openshift-logging Example oc get pods output NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m 1 These pods are in Pending status because their corresponding PVCs are in the failed zone. List the PVCs in Pending status by running the following command: oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r Example oc get pvc output storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1 Delete the PVC(s) for a pod by running the following command: oc delete pvc <pvc_name> -n openshift-logging Then delete the pod(s) by running the following command: oc delete pod <pod_name> -n openshift-logging Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. 11.3.4.1.1. Troubleshooting PVC in a terminating state The PVCs might hang in the terminating state without being deleted, if the PVC metadata finalizers are set to kubernetes.io/pv-protection . Removing the finalizers should allow the PVCs to delete successfully. Remove the finalizer for each PVC by running the command below, then retry deletion. oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging Additional resources Topology spread constraints Kubernetes documentation Kubernetes storage documentation . Controlling pod placement by using pod topology spread constraints 11.3.5. Fine-grained access for Loki logs In logging 5.8 and later, the Red Hat OpenShift Logging Operator does not grant all users access to logs by default. As an administrator, you must configure your users' access unless the Operator was upgraded and prior configurations are in place. Depending on your configuration and needs, you can configure fine-grained access to logs by using the following: Cluster wide policies Namespace scoped policies Creation of custom admin groups As an administrator, you need to create the role bindings and cluster role bindings appropriate for your deployment; a short CLI sketch follows the role list below. The Red Hat OpenShift Logging Operator provides the following cluster roles: cluster-logging-application-view grants permission to read application logs. cluster-logging-infrastructure-view grants permission to read infrastructure logs. cluster-logging-audit-view grants permission to read audit logs.
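If you prefer the CLI to writing binding YAML by hand, the oc adm policy commands can create equivalent bindings for these cluster roles. This is a brief sketch; the user, group, and namespace names are placeholders, and the YAML examples in the following sections remain the documented approach.

# Cluster-wide read access to application logs for a single user or a group.
oc adm policy add-cluster-role-to-user cluster-logging-application-view <username>
oc adm policy add-cluster-role-to-group cluster-logging-application-view <groupname>

# Namespace-scoped access: creates a RoleBinding in <namespace> that references the same cluster role.
oc adm policy add-role-to-user cluster-logging-application-view <username> -n <namespace>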
If you have upgraded from a prior version, an additional cluster role logging-application-logs-reader and associated cluster role binding logging-all-authenticated-application-logs-reader provide backward compatibility, allowing any authenticated user read access in their namespaces. Note Users with access by namespace must provide a namespace when querying application logs.
11.3.5.1. Cluster wide access Cluster role binding resources reference cluster roles and set permissions cluster wide. Example ClusterRoleBinding kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io 1 Additional ClusterRoles are cluster-logging-infrastructure-view and cluster-logging-audit-view . 2 Specifies the users or groups this object applies to.
11.3.5.2. Namespaced access RoleBinding resources can be used with ClusterRole objects to define the namespace in which a user or group can access logs. Example RoleBinding kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0 1 Specifies the namespace this RoleBinding applies to.
11.3.5.3. Custom admin group access If you have a large deployment with several users who require broader permissions, you can create a custom group using the adminGroups field. Users who are members of any group specified in the adminGroups field of the LokiStack CR are considered administrators. Administrator users have access to all application logs in all namespaces if they are also assigned the cluster-logging-application-view role. Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3 1 Custom admin groups are only available in this mode. 2 Entering an empty list [] value for this field disables admin groups. 3 Overrides the default groups ( system:cluster-admins , cluster-admin , dedicated-admin ). Additional resources Using RBAC to define and apply permissions
11.3.6. Enabling stream-based retention with Loki With Logging version 5.6 and higher, you can configure retention policies based on log streams. Rules for these may be set globally, per tenant, or both. If you configure both, tenant rules apply before global rules. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Note Although logging version 5.9 and higher supports schema v12, v13 is recommended.
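For reference, a storage schema stanza within the LokiStack spec that uses the recommended v13 version might look like the following sketch; the effective date is a placeholder that you set to the date on which the new schema takes effect: storage: schemas: - effectiveDate: "<yyyy>-<mm>-<dd>" version: v13 For an existing deployment, the usual approach is to add a new schema entry with a future effective date rather than to edit the existing entry.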
To enable stream-based retention, create a LokiStack CR: Example global stream-based retention apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~"test.+"}' 3 - days: 1 priority: 1 selector: '{log_type="infrastructure"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging 1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. 2 Retention is enabled in the cluster when this block is added to the CR. 3 Contains the LogQL query used to define the log stream. Example per-tenant stream-based retention apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~"test.+"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging 1 Sets retention policy by tenant. Valid tenant types are application , audit , and infrastructure . 2 Contains the LogQL query used to define the log stream. Apply the LokiStack CR: USD oc apply -f <filename>.yaml Note This is not for managing the retention for stored logs. Global retention periods for stored logs, up to a supported maximum of 30 days, are configured with your object storage.
11.3.7. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ......
\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true Example Fluentd error message 2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" The error is also visible on the receiving end. For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 11.3.8. Configuring Loki to tolerate memberlist creation failure In an OpenShift cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack CR to use the podIP in the hashRing spec. To configure the LokiStack CR, use the following command: USD oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP","type": "memberlist"}}}}' Example LokiStack to include podIP apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... 
hashRing: type: memberlist memberlist: instanceAddrType: podIP # ... 11.3.9. Additional resources Loki components documentation Loki Query Language (LogQL) documentation Grafana Dashboard documentation Loki Object Storage documentation Loki Operator IngestionLimitSpec documentation Loki Storage Schema documentation 11.4. Configuring the Elasticsearch log store You can use Elasticsearch 6 to store and organize log data. You can make modifications to your log store, including: Storage for your Elasticsearch cluster Shard replication across data nodes in the cluster, from full replication to no replication External access to Elasticsearch data 11.4.1. Configuring log storage You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch. You have created a ClusterLogging CR. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Procedure Modify the ClusterLogging CR logStore spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {} # ... 1 Specify the log store type. This can be either lokistack or elasticsearch . 2 Optional configuration options for the Elasticsearch log store. 3 Specify the redundancy type. This value can be ZeroRedundancy , SingleRedundancy , MultipleRedundancy , or FullRedundancy . 4 Optional configuration options for LokiStack. Example ClusterLogging CR to specify LokiStack as the log store apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki # ... Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 11.4.2. Forwarding audit logs to the log store In a logging deployment, container and infrastructure logs are forwarded to the internal log store defined in the ClusterLogging custom resource (CR) by default. Audit logs are not forwarded to the internal log store by default because this does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured. If this default configuration meets your needs, you do not need to configure a ClusterLogForwarder CR. If a ClusterLogForwarder CR exists, logs are not forwarded to the internal log store unless a pipeline is defined that contains the default output. Procedure To use the Log Forward API to forward audit logs to the internal Elasticsearch instance: Create or edit a YAML file that defines the ClusterLogForwarder CR object: Create a CR to send all log types to the internal Elasticsearch instance. 
You can use the following example without making any changes: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default 1 A pipeline defines the type of logs to forward using the specified output. The default output forwards logs to the internal Elasticsearch instance. Note You must specify all three types of logs in the pipeline: application, infrastructure, and audit. If you do not specify a log type, those logs are not stored and will be lost. If you have an existing ClusterLogForwarder CR, add a pipeline to the default output for the audit logs. You do not need to define the default output. For example: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: "elasticsearch" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: "elasticsearch" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: "fluentdForward" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1 1 This pipeline sends the audit logs to the internal Elasticsearch instance in addition to an external instance. Additional resources About log collection and forwarding
11.4.3. Configuring log retention time You can configure a retention policy that specifies how long the default Elasticsearch log store keeps indices for each of the three log sources: infrastructure logs, application logs, and audit logs. To configure the retention policy, you set a maxAge parameter for each log source in the ClusterLogging custom resource (CR). The CR applies these values to the Elasticsearch rollover schedule, which determines when Elasticsearch deletes the rolled-over indices. Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions: The index is older than the rollover.maxAge value in the Elasticsearch CR. The index size is greater than 40 GB x the number of primary shards. The index doc count is greater than 40960 KB x the number of primary shards. Elasticsearch deletes the rolled-over indices based on the retention policy you configure. If you do not create a retention policy for any log sources, logs are deleted after seven days by default. Prerequisites The Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator must be installed. Procedure To configure the log retention time: Edit the ClusterLogging CR to add or modify the retentionPolicy parameter: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" ... spec: managementState: "Managed" logStore: type: "elasticsearch" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 ... 1 Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), days(d), hours(h/H), minutes(m), and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted.
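If you prefer to change only the retention policy rather than edit the full CR, a merge patch such as the following sketch can also work; the resource name instance and the maxAge values shown are examples to adapt to your deployment: oc -n openshift-logging patch ClusterLogging instance --type=merge -p '{"spec":{"logStore":{"retentionPolicy":{"application":{"maxAge":"1d"},"infra":{"maxAge":"7d"},"audit":{"maxAge":"7d"}}}}}'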
By default, logs are retained for seven days. You can verify the settings in the Elasticsearch custom resource (CR). For example, the Red Hat OpenShift Logging Operator updated the following Elasticsearch CR to configure a retention policy that includes settings to roll over active indices for the infrastructure logs every eight hours and the rolled-over indices are deleted seven days after rollover. OpenShift Container Platform checks every 15 minutes to determine if the indices need to be rolled over. apiVersion: "logging.openshift.io/v1" kind: "Elasticsearch" metadata: name: "elasticsearch" spec: ... indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4 ... 1 For each log source, the retention policy indicates when to delete and roll over logs for that source. 2 When OpenShift Container Platform deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR. 3 The index age for OpenShift Container Platform to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR. 4 When OpenShift Container Platform checks if the indices should be rolled over. This setting is the default and cannot be changed. Note Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR. The OpenShift Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval . USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s 11.4.4. Configuring CPU and memory requests for the log store Each component specification allows for adjustments to both the CPU and memory requests. You should not have to manually adjust these values as the OpenShift Elasticsearch Operator sets values sufficient for your environment. Note In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy. Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: 1 resources: limits: 2 memory: "32Gi" requests: 3 cpu: "1" memory: "16Gi" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi 1 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request. 2 The maximum amount of resources a pod can use. 3 The minimum resources required to schedule a pod. 
4 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that are sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. When adjusting the amount of Elasticsearch memory, the same value should be used for both requests and limits . For example: resources: limits: 1 memory: "32Gi" requests: 2 cpu: "8" memory: "32Gi" 1 The maximum amount of the resource. 2 The minimum amount required. Kubernetes generally adheres to the node configuration and does not allow Elasticsearch to use the specified limits. Setting the same value for the requests and limits ensures that Elasticsearch can use the memory you want, assuming the node has the memory available.
11.4.5. Configuring replication policy for the log store You can define how Elasticsearch shards are replicated across data nodes in the cluster. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: redundancyPolicy: "SingleRedundancy" 1 1 Specify a redundancy policy for the shards. The change is applied upon saving the changes. FullRedundancy . Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance. MultipleRedundancy . Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance. SingleRedundancy . Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. Better performance than MultipleRedundancy when using 5 or more nodes. You cannot apply this policy on deployments of a single Elasticsearch node. ZeroRedundancy . Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy. Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
11.4.6. Scaling down Elasticsearch pods Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation. If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green , you can scale down by another pod. Note If your Elasticsearch cluster is set to ZeroRedundancy , you should not scale down your Elasticsearch pods.
11.4.7. Configuring persistent storage for the log store Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance. Warning Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.
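Before you configure persistent storage, you can optionally list the storage classes that are available in your cluster; the gp2 class used in the following example is specific to AWS and is only an illustration: oc get storageclass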
Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim. apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" # ... spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: storageClassName: "gp2" size: "200G" This example specifies each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.
11.4.8. Configuring the log store for emptyDir storage You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod's data is lost upon restart. Note When using emptyDir, if log storage is restarted or redeployed, you will lose data. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify emptyDir: spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: {}
11.4.9. Performing an Elasticsearch rolling cluster restart Perform a rolling restart when you change the elasticsearch config map or any of the elasticsearch-* deployment configurations. Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod runs require a reboot. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure To perform a rolling cluster restart: Change to the openshift-logging project: oc project openshift-logging Get the names of the Elasticsearch pods: oc get pods -l component=elasticsearch Scale down the collector pods so they stop sending new logs to Elasticsearch: USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "false"}}}}}' Perform a shard synced flush using the OpenShift Container Platform es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down: USD oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST For example: oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_flush/synced" -XPOST Example output {"_shards":{"total":4,"successful":4,"failed":0},".security":{"total":2,"successful":2,"failed":0},".kibana_1":{"total":2,"successful":2,"failed":0}} Prevent shard balancing when purposely bringing down nodes using the OpenShift Container Platform es_util tool: oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }' For example: oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }' Example output {"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient": After the command is complete, for each deployment you have for an ES cluster: By default, the OpenShift Container Platform Elasticsearch cluster blocks rollouts to their nodes. Use the following command to allow rollouts and allow the pod to pick up the changes: oc rollout resume deployment/<deployment-name> For example: oc rollout resume deployment/elasticsearch-cdm-0-1 Example output deployment.extensions/elasticsearch-cdm-0-1 resumed A new pod is deployed. After the pod has a ready container, you can move on to the next deployment. You can check the pod status by running the following command: oc get pods -l component=elasticsearch- Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h After the deployments are complete, reset the pod to disallow rollouts: oc rollout pause deployment/<deployment-name> For example: oc rollout pause deployment/elasticsearch-cdm-0-1 Example output deployment.extensions/elasticsearch-cdm-0-1 paused Check that the Elasticsearch cluster is in a green or yellow state: oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true Note If you performed a rollout on the Elasticsearch pod you used in the commands, the pod no longer exists and you need a new pod name here.
For example: oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true Example output { "cluster_name" : "elasticsearch", "status" : "yellow", 1 "timed_out" : false, "number_of_nodes" : 3, "number_of_data_nodes" : 3, "active_primary_shards" : 8, "active_shards" : 16, "relocating_shards" : 0, "initializing_shards" : 0, "unassigned_shards" : 1, "delayed_unassigned_shards" : 0, "number_of_pending_tasks" : 0, "number_of_in_flight_fetch" : 0, "task_max_waiting_in_queue_millis" : 0, "active_shards_percent_as_number" : 100.0 } 1 Make sure this parameter value is green or yellow before proceeding. If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod. After all the deployments for the cluster have been rolled out, re-enable shard balancing: oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }' For example: oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }' Example output { "acknowledged" : true, "persistent" : { }, "transient" : { "cluster" : { "routing" : { "allocation" : { "enable" : "all" } } } } } Scale up the collector pods so they send new logs to Elasticsearch. USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "true"}}}}}'
11.4.10. Exposing the log store service as a route By default, the log store that is deployed with logging is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data. Externally, you can access the log store by creating a reencrypt route and using your OpenShift Container Platform token and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains: The Authorization: Bearer USD{token} The Elasticsearch reencrypt route and an Elasticsearch API request . Internally, you can access the log store service using the log store cluster IP, which you can get by using either of the following commands: USD oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging Example output 172.30.183.229 USD oc get service elasticsearch -n openshift-logging Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h You can check the cluster IP address with a command similar to the following: USD oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://172.30.183.229:9200/_cat/health" Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108 Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. You must have access to the project to be able to access the logs. Procedure To expose the log store externally: Change to the openshift-logging project: USD oc project openshift-logging Extract the CA certificate from the log store and write to the admin-ca file: USD oc extract secret/elasticsearch --to=. --keys=admin-ca Example output admin-ca Create the route for the log store service as a YAML file: Create a YAML file with the following: apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1 1 Add the log store CA certificate or use the command in the next step. You do not have to set the spec.tls.key , spec.tls.certificate , and spec.tls.caCertificate parameters required by some reencrypt routes.
Run the following command to add the log store CA certificate to the route YAML you created in the step: USD cat ./admin-ca | sed -e "s/^/ /" >> <file-name>.yaml Create the route: USD oc create -f <file-name>.yaml Example output route.route.openshift.io/elasticsearch created Check that the Elasticsearch service is exposed: Get the token of this service account to be used in the request: USD token=USD(oc whoami -t) Set the elasticsearch route you created as an environment variable. USD routeES=`oc get route elasticsearch -o jsonpath={.spec.host}` To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route: curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://USD{routeES}" The response appears similar to the following: Example output { "name" : "elasticsearch-cdm-i40ktba0-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "0eY-tJzcR3KOdpgeMJo-MQ", "version" : { "number" : "6.8.1", "build_flavor" : "oss", "build_type" : "zip", "build_hash" : "Unknown", "build_date" : "Unknown", "build_snapshot" : true, "lucene_version" : "7.7.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "<tagline>" : "<for search>" } 11.4.11. Removing unused components if you do not use the default Elasticsearch log store As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster. In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources. Prerequisites Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default . For example: outputRefs: - default Warning Suppose the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster, and you remove the logStore component from the ClusterLogging CR. In that case, the internal Elasticsearch cluster will not be present to store the log data. This absence can cause data loss. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance If they are present, remove the logStore and visualization stanzas from the ClusterLogging CR. Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: "openshift-logging" spec: managementState: "Managed" collection: type: "fluentd" fluentd: {} Verify that the collector pods are redeployed: USD oc get pods -l component=collector -n openshift-logging
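As an optional check that is not part of the documented procedure, you can confirm that the logStore and visualization stanzas are no longer present by inspecting the modified CR: oc get ClusterLogging instance -n openshift-logging -o yaml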
[ "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed", "apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-5.9\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-5.9\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging", "apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace", "oc apply -f <filename>.yaml", "apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-logging: \"true\" openshift.io/cluster-monitoring: \"true\" 2", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace", "oc apply -f <filename>.yaml", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: 
openshift-logging 8", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed", "oc apply -f <filename>.yaml", "oc get pods -n openshift-logging", "oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m", "oc create secret generic -n openshift-logging <your_secret_name> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>", "oc get secrets", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging", "oc get pods -n openshift-logging", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s", "oc create secret generic logging-loki-aws --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\"", "oc -n openshift-logging create secret generic \"logging-loki-aws\" --from-literal=bucketnames=\"<s3_bucket_name>\" --from-literal=region=\"<bucket_region>\" --from-literal=audience=\"<oidc_audience>\" 1", "oc create secret generic logging-loki-azure 
--from-literal=container=\"<azure_container_name>\" --from-literal=environment=\"<azure_environment>\" \\ 1 --from-literal=account_name=\"<azure_account_name>\" --from-literal=account_key=\"<azure_account_key>\"", "oc -n openshift-logging create secret generic logging-loki-azure --from-literal=environment=\"<azure_environment>\" --from-literal=account_name=\"<storage_account_name>\" --from-literal=container=\"<container_name>\"", "oc create secret generic logging-loki-gcs --from-literal=bucketname=\"<bucket_name>\" --from-file=key.json=\"<path/to/key.json>\"", "oc create secret generic logging-loki-minio --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<minio_bucket_endpoint>\" --from-literal=access_key_id=\"<minio_access_key_id>\" --from-literal=access_key_secret=\"<minio_access_key_secret>\"", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io", "BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')", "ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)", "oc create -n openshift-logging secret generic logging-loki-odf --from-literal=access_key_id=\"<access_key_id>\" --from-literal=access_key_secret=\"<secret_access_key>\" --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"https://<bucket_host>:<bucket_port>\"", "oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\"", "oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\" --from-literal=project_id=\"<swift_project_id>\" --from-literal=project_name=\"<swift_project_name>\" --from-literal=project_domain_id=\"<swift_project_domain_id>\" --from-literal=project_domain_name=\"<swift_project_domain_name>\" --from-literal=region=\"<swift_region>\"", "apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}", "oc apply -f 
<filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator", "oc apply -f <filename>.yaml", "oc get csv -n --all-namespaces", "NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki", "oc apply -f <filename>.yaml", "oc adm groups new cluster-admin", "oc adm groups add-users cluster-admin <username>", "oc adm policy add-cluster-role-to-group cluster-admin cluster-admin", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4", "get pods --field-selector status.phase==Pending -n openshift-logging", "NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m", "get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r", "storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1", "delete pvc __<pvc_name>__ -n openshift-logging", "delete pod __<pod_name>__ -n openshift-logging", "patch pvc __<pvc_name>__ -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging", "kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: 
name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging", "oc apply -f <filename>.yaml", "\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}", "429 Too Many Requests Ingestion rate limit exceeded", "2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true", "2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. 
retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"", "level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2", "oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\",\"type\": \"memberlist\"}}}}'", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3", "apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4", "oc get cronjob", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s", "oc edit ClusterLogging instance", 
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi", "resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"", "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "oc project openshift-logging", "oc get pods -l component=elasticsearch", "oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST", "oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST", "{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'", "oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'", "{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":", "oc rollout resume deployment/<deployment-name>", "oc rollout resume deployment/elasticsearch-cdm-0-1", "deployment.extensions/elasticsearch-cdm-0-1 resumed", "oc get pods -l component=elasticsearch-", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h", "oc rollout pause deployment/<deployment-name>", "oc rollout pause deployment/elasticsearch-cdm-0-1", "deployment.extensions/elasticsearch-cdm-0-1 paused", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true", "oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true", "{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'", "oc exec 
elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'", "{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }", "oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'", "oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging", "172.30.183.229", "oc get service elasticsearch -n openshift-logging", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h", "oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108", "oc project openshift-logging", "oc extract secret/elasticsearch --to=. --keys=admin-ca", "admin-ca", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1", "cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml", "oc create -f <file-name>.yaml", "route.route.openshift.io/elasticsearch created", "token=USD(oc whoami -t)", "routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`", "curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"", "{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }", "outputRefs: - default", "oc edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: type: \"fluentd\" fluentd: {}", "oc get pods -l component=collector -n openshift-logging" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/logging/log-storage-2
Chapter 9. Installing a cluster on IBM Cloud in a restricted network
Chapter 9. Installing a cluster on IBM Cloud in a restricted network In OpenShift Container Platform 4.16, you can install a cluster in a restricted network by creating an internal mirror of the installation release content that is accessible to an existing Virtual Private Cloud (VPC) on IBM Cloud(R). 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You configured an IBM Cloud account to host the cluster. You have a container image registry that is accessible to the internet and your restricted network. The container image registry should mirror the contents of the OpenShift image registry and contain the installation media. For more information, see Mirroring images for a disconnected installation using the oc-mirror plugin . You have an existing VPC on IBM Cloud(R) that meets the following requirements: The VPC contains the mirror registry or has firewall rules or a peering connection to access the mirror registry that is hosted elsewhere. The VPC can access IBM Cloud(R) service endpoints using a public endpoint. If network restrictions limit access to public service endpoints, evaluate those services for alternate endpoints that might be available. For more information see Access to IBM service endpoints . You cannot use the VPC that the installation program provisions by default. If you plan on configuring endpoint gateways to use IBM Cloud(R) Virtual Private Endpoints, consider the following requirements: Endpoint gateway support is currently limited to the us-east and us-south regions. The VPC must allow traffic to and from the endpoint gateways. You can use the VPC's default security group, or a new security group, to allow traffic on port 443. For more information, see Allowing endpoint gateway traffic . If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud VPC . 9.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. 9.2.1. Required internet access and an installation host You complete the installation using a bastion host or portable device that can access both the internet and your closed network. You must use a host with internet access to: Download the installation program, the OpenShift CLI ( oc ), and the CCO utility ( ccoctl ). Use the installation program to locate the Red Hat Enterprise Linux CoreOS (RHCOS) image and create the installation configuration file. Use oc to extract ccoctl from the CCO container image. Use oc and ccoctl to configure IAM for IBM Cloud(R). 9.2.2. Access to a mirror registry To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your restricted network, or by using other methods that meet your organization's security restrictions. For more information on mirroring images for a disconnected installation, see "Additional resources". 9.2.3. 
Access to IBM service endpoints The installation program requires access to the following IBM Cloud(R) service endpoints: Cloud Object Storage DNS Services Global Search Global Tagging Identity Services Resource Controller Resource Manager VPC Note If you are specifying an IBM(R) Key Protect for IBM Cloud(R) root key as part of the installation process, the service endpoint for Key Protect is also required. By default, the public endpoint is used to access the service. If network restrictions limit access to public service endpoints, you can override the default behavior. Before deploying the cluster, you can update the installation configuration file ( install-config.yaml ) to specify the URI of an alternate service endpoint. For more information on usage, see "Additional resources". 9.2.4. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. Additional resources Mirroring images for a disconnected installation using the oc-mirror plugin Additional IBM Cloud configuration parameters 9.3. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 9.3.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 9.3.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to the existing VPC. As part of the installation, specify the following in the install-config.yaml file: The name of the existing resource group that contains the VPC and subnets ( networkResourceGroupName ) The name of the existing VPC ( vpcName ) The subnets that were created for control plane machines and compute machines ( controlPlaneSubnets and computeSubnets ) Note Additional installer-provisioned cluster resources are deployed to a separate resource group ( resourceGroupName ). You can specify this resource group before installing the cluster. If undefined, a new resource group is created for the cluster. To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. 
The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 9.3.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 9.3.4. Allowing endpoint gateway traffic If you are using IBM Cloud(R) Virtual Private Endpoints, your Virtual Private Cloud (VPC) must be configured to allow traffic to and from the endpoint gateways. A VPC's default security group is configured to allow all outbound traffic to endpoint gateways. Therefore, the simplest way to allow traffic between your VPC and endpoint gateways is to modify the default security group to allow inbound traffic on port 443. Note If you choose to configure a new security group, the security group must be configured to allow both inbound and outbound traffic. Prerequisites You have installed the IBM Cloud(R) Command Line Interface utility ( ibmcloud ). Procedure Obtain the identifier for the default security group by running the following command: USD DEFAULT_SG=USD(ibmcloud is vpc <your_vpc_name> --output JSON | jq -r '.default_security_group.id') Add a rule that allows inbound traffic on port 443 by running the following command: USD ibmcloud is security-group-rule-add USDDEFAULT_SG inbound tcp --remote 0.0.0.0/0 --port-min 443 --port-max 443 Note Be sure that your endpoint gateways are configured to use this security group. 9.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.5. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 9.6. Downloading the RHCOS cluster image The installation program requires the Red Hat Enterprise Linux CoreOS (RHCOS) image to install the cluster. While optional, downloading the Red Hat Enterprise Linux CoreOS (RHCOS) image before deploying removes the need for internet access when creating the cluster. Use the installation program to locate and download the Red Hat Enterprise Linux CoreOS (RHCOS) image. Prerequisites The host running the installation program has internet access. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install coreos print-stream-json Use the output of the command to find the location of the IBM Cloud(R) image. Example output "release": "415.92.202311241643-0", "formats": { "qcow2.gz": { "disk": { "location": "https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.15-9.2/builds/415.92.202311241643-0/x86_64/rhcos-415.92.202311241643-0-ibmcloud.x86_64.qcow2.gz", "sha256": "6b562dee8431bec3b93adeac1cfefcd5e812d41e3b7d78d3e28319870ffc9eae", "uncompressed-sha256": "5a0f9479505e525a30367b6a6a6547c86a8f03136f453c1da035f3aa5daa8bc9" Download and extract the image archive. Make the image available on the host that the installation program uses to create the cluster. 9.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file.
Prerequisites You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You have the imageContentSourcePolicy.yaml file that was created when you mirrored your registry. You have obtained the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . When customizing the sample template, be sure to provide the information that is required for an installation in a restricted network: Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.ibmcloud field: vpcName: <existing_vpc> controlPlaneSubnets: <control_plane_subnet> computeSubnets: <compute_subnet> For platform.ibmcloud.vpcName , specify the name for the existing IBM Cloud VPC. For platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. If network restrictions limit the use of public endpoints to access the required IBM Cloud(R) services, add the serviceEndpoints stanza to platform.ibmcloud to specify an alternate service endpoint. Note You can specify only one alternate service endpoint for each service. Example of using alternate services endpoints # ... 
serviceEndpoints: - name: IAM url: <iam_alternate_endpoint_url> - name: VPC url: <vpc_alternate_endpoint_url> - name: ResourceController url: <resource_controller_alternate_endpoint_url> - name: ResourceManager url: <resource_manager_alternate_endpoint_url> - name: DNSServices url: <dns_services_alternate_endpoint_url> - name: COS url: <cos_alternate_endpoint_url> - name: GlobalSearch url: <global_search_alternate_endpoint_url> - name: GlobalTagging url: <global_tagging_alternate_endpoint_url> # ... Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Note If you use the default value of External , your network must be able to access the public endpoint for IBM Cloud(R) Internet Services (CIS). CIS is not enabled for Virtual Private Endpoints. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 9.7.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources Installation configuration parameters for IBM Cloud(R) 9.7.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 9.7.3. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 9.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 mx2-8x64 mx2d-4x32 mx3d-2x20 ox2-4x32 ox2-8x64 ux2d-2x56 vx2d-4x56 9.7.4. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibm-cloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-east 12 resourceGroupName: us-east-example-cluster-rg 13 serviceEndpoints: 14 - name: IAM url: https://private.us-east.iam.cloud.ibm.com - name: VPC url: https://us-east.private.iaas.cloud.ibm.com/v1 - name: ResourceController url: https://private.us-east.resource-controller.cloud.ibm.com - name: ResourceManager url: https://private.us-east.resource-controller.cloud.ibm.com - name: DNSServices url: https://api.private.dns-svcs.cloud.ibm.com/v1 - name: COS url: https://s3.direct.us-east.cloud-object-storage.appdomain.cloud - name: GlobalSearch url: https://api.private.global-search-tagging.cloud.ibm.com - name: GlobalTagging url: https://tags.private.global-search-tagging.cloud.ibm.com networkResourceGroupName: us-east-example-existing-network-rg 15 vpcName: us-east-example-network-1 16 controlPlaneSubnets: 17 - us-east-example-network-1-cp-us-east-1 - us-east-example-network-1-cp-us-east-2 - us-east-example-network-1-cp-us-east-3 computeSubnets: 18 - us-east-example-network-1-compute-us-east-1 - us-east-example-network-1-compute-us-east-2 - us-east-example-network-1-compute-us-east-3 credentialsMode: Manual pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 additionalTrustBundle: | 22 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 23 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 8 12 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 13 The name of an existing resource group. 
All installer-provisioned cluster resources are deployed to this resource group. If undefined, a new resource group is created for the cluster. 14 Based on the network restrictions of the VPC, specify alternate service endpoints as needed. This overrides the default public endpoint for the service. 15 Specify the name of the resource group that contains the existing virtual private cloud (VPC). The existing VPC and subnets should be in this resource group. The cluster will be installed to this VPC. 16 Specify the name of an existing VPC. 17 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 18 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 19 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials> , specify the base64-encoded user name and password for your mirror registry. 20 Enables or disables FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated or Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 21 Optional: provide the sshKey value that you use to access the machines in your cluster. 22 Provide the contents of the certificate file that you used for your mirror registry. 23 Provide these values from the metadata.name: release-0 section of the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 9.8. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 9.9. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 9.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. If the Red Hat Enterprise Linux CoreOS (RHCOS) image is available locally, the host running the installation program does not require internet access. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Export the OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE variable to specify the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image by running the following command: USD export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="<path_to_image>/rhcos-<image_version>-ibmcloud.x86_64.qcow2.gz" Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 
2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 9.12. Post installation Complete the following steps to complete the configuration of your cluster. 9.12.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. 
From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 9.12.2. Installing the policy resources into the cluster Mirroring the OpenShift Container Platform content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include catalogSource-certified-operator-index.yaml and imageContentSourcePolicy.yaml . The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. After you install the cluster, you must install these resources into the cluster. Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. Apply the YAML files from the results directory to the cluster: USD oc apply -f ./oc-mirror-workspace/results-<id>/ Verification Verify that the ImageContentSourcePolicy resources were successfully installed: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed: USD oc get catalogsource --all-namespaces 9.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 9.14. Next steps Customize your cluster . Optional: Opt out of remote health reporting .
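The following is a minimal post-installation sanity check, not part of the official procedure; it assumes that the kubeconfig exported in section 9.11 is still active and uses only standard oc subcommands. In a restricted network, the ClusterVersion status is expected to include the Unable to retrieve available updates condition described in section 9.2.4, so that message by itself does not indicate a failed installation.

# Confirm that the installation completed and the cluster version reports Available
oc get clusterversion

# Confirm that all cluster Operators report Available=True and are not Degraded
oc get clusteroperators

# Confirm that all control plane and compute nodes are in the Ready state
oc get nodes

If any Operator stays degraded, running oc describe clusteroperator <operator_name> usually identifies the failing component; <operator_name> here is a placeholder for the Operator reported by the previous command.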
[ "DEFAULT_SG=USD(ibmcloud is vpc <your_vpc_name> --output JSON | jq -r '.default_security_group.id')", "ibmcloud is security-group-rule-add USDDEFAULT_SG inbound tcp --remote 0.0.0.0/0 --port-min 443 --port-max 443", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export IC_API_KEY=<api_key>", "./openshift-install coreos print-stream-json", ".Example output ---- \"release\": \"415.92.202311241643-0\", \"formats\": { \"qcow2.gz\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.15-9.2/builds/415.92.202311241643-0/x86_64/rhcos-415.92.202311241643-0-ibmcloud.x86_64.qcow2.gz\", \"sha256\": \"6b562dee8431bec3b93adeac1cfefcd5e812d41e3b7d78d3e28319870ffc9eae\", \"uncompressed-sha256\": \"5a0f9479505e525a30367b6a6a6547c86a8f03136f453c1da035f3aa5daa8bc9\" ----", "mkdir <installation_directory>", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "vpcName: <existing_vpc> controlPlaneSubnets: <control_plane_subnet> computeSubnets: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "serviceEndpoints: - name: IAM url: <iam_alternate_endpoint_url> - name: VPC url: <vpc_alternate_endpoint_url> - name: ResourceController url: <resource_controller_alternate_endpoint_url> - name: ResourceManager url: <resource_manager_alternate_endpoint_url> - name: DNSServices url: <dns_services_alternate_endpoint_url> - name: COS url: <cos_alternate_endpoint_url> - name: GlobalSearch url: <global_search_alternate_endpoint_url> - name: GlobalTagging url: <global_tagging_alternate_endpoint_url>", "publish: Internal", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibm-cloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-east 12 resourceGroupName: us-east-example-cluster-rg 13 serviceEndpoints: 14 - name: IAM url: https://private.us-east.iam.cloud.ibm.com - name: VPC url: https://us-east.private.iaas.cloud.ibm.com/v1 - name: ResourceController url: https://private.us-east.resource-controller.cloud.ibm.com - name: ResourceManager url: https://private.us-east.resource-controller.cloud.ibm.com - name: DNSServices url: https://api.private.dns-svcs.cloud.ibm.com/v1 - name: COS url: 
https://s3.direct.us-east.cloud-object-storage.appdomain.cloud - name: GlobalSearch url: https://api.private.global-search-tagging.cloud.ibm.com - name: GlobalTagging url: https://tags.private.global-search-tagging.cloud.ibm.com networkResourceGroupName: us-east-example-existing-network-rg 15 vpcName: us-east-example-network-1 16 controlPlaneSubnets: 17 - us-east-example-network-1-cp-us-east-1 - us-east-example-network-1-cp-us-east-2 - us-east-example-network-1-cp-us-east-3 computeSubnets: 18 - us-east-example-network-1-compute-us-east-1 - us-east-example-network-1-compute-us-east-2 - us-east-example-network-1-compute-us-east-3 credentialsMode: Manual pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 additionalTrustBundle: | 22 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 23 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE=\"<path_to_image>/rhcos-<image_version>-ibmcloud.x86_64.qcow2.gz\"", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc apply -f ./oc-mirror-workspace/results-<id>/", "oc get imagecontentsourcepolicy", "oc get catalogsource --all-namespaces" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_cloud/installing-ibm-cloud-restricted
Chapter 7. Bug fixes
Chapter 7. Bug fixes This section describes the notable bug fixes introduced in Red Hat OpenShift Data Foundation 4.18. 7.1. Disaster recovery Volsync in DR dashboard reports operator degraded Previously, Red Hat Advanced Cluster Management for Kubernetes (RHACM) 2.13 deployed the Volsync operator on a managed cluster without creating the ClusterServiceVersion (CSV) custom resource (CR). As a result, OpenShift did not generate csv_succeeded metrics for Volsync and hence the ODF-DR dashboard did not display the health status of the Volsync operator. With this fix, for Volsync , the csv_succeeded metric is replaced with kube_running_pod_ready . Therefore, the RHACM metrics whitelisting ConfigMap is updated and the ODF-DR dashboard is able to monitor the health of the Volsync operator effectively. ( DFBUGS-1293 ) Replication using Volsync requires PVC to be mounted before PVC is synchronized Previously, a PVC which was not mounted would not be synced to the secondary cluster. With this fix, ODF-DR syncs the PVC even when it is not part of the PVCLabelSelector . ( DFBUGS-580 ) 7.2. Multicloud Object Gateway Attempting to delete a bucketclass or OBC that do not exist does not result in an error in MCG CLI Previously, an attempt to delete a bucketclass or object bucket claim (OBC) that does not exist using the MCG CLI did not result in an error. With this fix, error messages on CLI deletion of bucketclasses and OBCs are improved. ( DFBUGS-201 ) 502 Bad Gateway observed on s3 get operation: noobaa is throwing error at 'MapClient.read_chunks: chunk ERROR Error: had chunk errors chunk Previously, the object was corrupted due to a race condition within MCG between a canceled part of an upload and the dedup flow finding a match. The said part would be flagged as a duplicate and then canceled and reclaimed leaving the second duped part pointing to a reclaimed data which is no longer valid. With this fix, deduping with chunks that are not yet marked as finished uploads is avoided and a time buffer is added after completion to ensure chunks are alive and can be deduped into. ( DFBUGS-216 ) Namespace store stuck in rejected state Previously, during monitoring of NSStore when MCG tries to verify access and existence of the target bucket, certain errors were not ignored even though they should have been ignored. With this fix, issue report on read-object_md is prevented when the object does not exist. ( DFBUGS-700 ) Updating bucket quota always result in 1PB quota limit Previously, MCG bucket quota resulted in a 1PB quota limit regardless of the desired value. With this fix, the correct value is set on the bucket quota limit. ( DFBUGS-1173 ) Using PutObject via boto3 >= 1.36.0 results in InvalidDigest error Previously, PUT requests with clients that used the upgraded AWS SDK or CLI resulted in error because AWS SDK or CLI changed the default S3 client behavior to always calculate a checksum by default for operations that support it. With this fix, the PUT requests from S3 clients are allowed with the changed behavior. ( DFBUGS-1513 ) 7.3. Ceph with panic_on_warn set, the kernel ceph fs module panicked in ceph_fill_file_size Previously, kernel panic with the note not syncing: panic-on_warn_set occurred due to a specific hard-to-reproduce CephFS scenario. With this fix, the RHEL kernel was fixed and as a result, the specific CephFS scenario no longer occurs. ( DFBUGS-551 ) 7.4. 
Ceph container storage interface (CSI) operator ceph-csi-controller-manager pods OOMKilled Previously, ceph-csi-controller-manager pods were OOMKilled because these pods tried to cache all configmaps in the cluster when OpenShift Data Foundation was installed. With this fix, the cache is scoped only to the namespace where the ceph-csi-controller-manager pod is running. As a result, memory usage by the pods is stable and the pods are no longer OOMKilled. ( DFBUGS-938 ) 7.5. OCS Operator rook-ceph-mds pods scheduled on the same node because placement anti-affinity is preferred, not required Previously, MDS pods for an active MDS daemon could be scheduled in the same failure domain, as MDS pods had only preferred pod anti-affinity. With this fix, required anti-affinity is applied when activeMDS = 1, while preferred anti-affinity remains for activeMDS > 1. As a result, when activeMDS = 1, the two MDS pods of the active daemon have required anti-affinity, ensuring that they are not scheduled in the same failure domain (a minimal sketch of such a required anti-affinity term is shown at the end of this chapter); when activeMDS > 1, the anti-affinity remains preferred and an active and standby MDS pair can be scheduled on the same node. ( DFBUGS-1509 ) 7.6. OpenShift Data Foundation console Incorrect labels on the worker nodes for OpenShift Data Foundation on ROSA HCP Previously, when OpenShift Data Foundation was installed in a namespace other than openshift-storage (the ROSA use case), the user interface (UI) labeled the nodes during StorageSystem deployment with a dynamic label, cluster.ocs.openshift.io/<CLUSTER_NAMESPACE>: (where CLUSTER_NAMESPACE is the namespace where the StorageSystem is created). However, the ODF and OCS operators expect the label to be static and always equal to "cluster.ocs.openshift.io/openshift-storage: ''", irrespective of where OpenShift Data Foundation is installed or the StorageSystem is deployed. With this fix, the UI always adds the static label "cluster.ocs.openshift.io/openshift-storage: ''" to the nodes and, as a result, the installation proceeds as expected. ( DFBUGS-137 ) Tooltip rendered behind other components Previously, when graphs or charts on the dashboards were hovered over, tooltips were hidden behind them and the values were not visible. This was due to a PatternFly v5 library issue. With this fix, PatternFly is updated to a newer minor version and, as a result, tooltips are clearly visible. ( DFBUGS-156 ) BackingStore details shows incorrect provider Previously, the BackingStore details page showed an incorrect provider due to incorrect mapping of the provider name. With this fix, the UI logic was updated to display the provider name correctly. ( DFBUGS-353 ) Error message popup fails to alert on rule Previously, OBCs with the same name could be created in different namespaces without any notification, which led to potential conflicts or unintended behavior. This was because the user interface did not track object bucket claims (OBCs) across namespaces, which allowed duplicate OBC names without a proper warning. With this fix, the validation logic is updated to properly check and notify when you attempt to create an OBC with a duplicate name. A clear warning is displayed if an OBC with the same name exists, preventing confusion and ensuring correct behavior.
( DFBUGS-410 ) A 404: Not Found message is briefly displayed for a few seconds when clicking on the 'Enable Encryption' checkbox during StorageClass creation Previously, a "404: Not Found" message was briefly displayed for a few seconds while enabling encryption by using the 'Enable Encryption' checkbox during new StorageClass creation. With this fix, the conditions that caused the issue were corrected. As a result, the "404: Not Found" message is no longer shown and the configuration form is displayed directly after a brief loading state. ( DFBUGS-489 ) Existing warning alert "Inconsistent data on target cluster" does not go away Previously, when an incorrect target cluster was selected for failover/relocate operations, the existing warning alert "Inconsistent data on target cluster" did not disappear. With this fix, the warning alert is refreshed correctly when changing the target cluster for subscription apps. As a result, the alert no longer persists unnecessarily when failover/relocation is triggered for discovered applications. ( DFBUGS-866 ) 7.7. Rook rook-ceph-osd-prepare-ocs-deviceset pods produce duplicate metrics Previously, alerts were raised from kube-state-metrics because of duplicate tolerations in the OSD prepare pods. With this fix, completed OSD prepare pods that had duplicate tolerations are removed. As a result, duplicate alerts are no longer raised during upgrades. ( DFBUGS-839 ) 7.8. Ceph monitoring Prometheus rule evaluation errors Previously, many PrometheusRuleFailures errors were logged and the affected alerts were not triggered because many alert and rule queries that included the ceph_disk_occupation metric used a wrong or invalid label. With this fix, the erroneous label was corrected and the queries of the affected alerts were updated. As a result, Prometheus rule evaluation succeeds and all alerts are deployed successfully. ( DFBUGS-789 ) Alert "CephMdsCPUUsageHighNeedsVerticalScaling" not triggered when MDS usage is high Previously, ocs-operator was unable to read or deploy the malformed rule file and the alerts associated with this file were not visible. This was due to incorrect indentation in the PrometheusRule file, prometheus-ocs-rule.yaml. With this fix, the indentation is corrected and, as a result, the PrometheusRule file is deployed successfully. ( DFBUGS-951 )
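For reference, the required anti-affinity described in ( DFBUGS-1509 ) is expressed with standard Kubernetes pod anti-affinity. The following is a minimal sketch of what such a required term looks like inside a pod template spec; the app label value and the topology key shown here are illustrative assumptions and not necessarily the exact selectors that Rook generates:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: rook-ceph-mds                     # assumed MDS pod label
        topologyKey: topology.kubernetes.io/zone   # assumed failure domain key

With a required term, the scheduler refuses to place two matching MDS pods in the same failure domain; a preferredDuringSchedulingIgnoredDuringExecution term only weights the scheduler against co-location and can still be violated.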
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/4.18_release_notes/bug_fixes
Operator Guide
Operator Guide Red Hat build of Keycloak 26.0 Red Hat Customer Content Services
[ "apiVersion: apps/v1 kind: StatefulSet metadata: name: postgresql-db spec: serviceName: postgresql-db-service selector: matchLabels: app: postgresql-db replicas: 1 template: metadata: labels: app: postgresql-db spec: containers: - name: postgresql-db image: postgres:15 volumeMounts: - mountPath: /data name: cache-volume env: - name: POSTGRES_USER value: testuser - name: POSTGRES_PASSWORD value: testpassword - name: PGDATA value: /data/pgdata - name: POSTGRES_DB value: keycloak volumes: - name: cache-volume emptyDir: {} --- apiVersion: v1 kind: Service metadata: name: postgres-db spec: selector: app: postgresql-db type: LoadBalancer ports: - port: 5432 targetPort: 5432", "apply -f example-postgres.yaml", "openssl req -subj '/CN=test.keycloak.org/O=Test Keycloak./C=US' -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem", "create secret tls example-tls-secret --cert certificate.pem --key key.pem", "create secret generic keycloak-db-secret --from-literal=username=[your_database_username] --from-literal=password=[your_database_password]", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 db: vendor: postgres host: postgres-db usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org proxy: headers: xforwarded # double check your reverse proxy sets and overwrites the X-Forwarded-* headers", "apply -f example-kc.yaml", "get keycloaks/example-kc -o go-template='{{range .status.conditions}}CONDITION: {{.type}}{{\"\\n\"}} STATUS: {{.status}}{{\"\\n\"}} MESSAGE: {{.message}}{{\"\\n\"}}{{end}}'", "CONDITION: Ready STATUS: true MESSAGE: CONDITION: HasErrors STATUS: false MESSAGE: CONDITION: RollingUpdate STATUS: false MESSAGE:", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ingress: className: openshift-default", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ingress: enabled: false", "apply -f example-kc.yaml", "oc create route reencrypt --service=<keycloak-cr-name>-service --cert=<configured-certificate> --key=<certificate-key> --dest-ca-cert=<ca-certificate> --ca-cert=<ca-certificate> --hostname=<hostname>", "port-forward service/example-kc-service 8443:8443", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: proxy: headers: forwarded|xforwarded", "get secret example-kc-initial-admin -o jsonpath='{.data.username}' | base64 --decode get secret example-kc-initial-admin -o jsonpath='{.data.password}' | base64 --decode", "apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: my-realm-kc spec: keycloakCRName: <name of the keycloak CR> realm:", "apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: my-realm-kc spec: keycloakCRName: <name of the keycloak CR> realm: id: example-realm realm: example-realm displayName: ExampleRealm enabled: true", "apply -f example-realm-import.yaml", "get keycloakrealmimports/my-realm-kc -o go-template='{{range .status.conditions}}CONDITION: {{.type}}{{\"\\n\"}} STATUS: {{.status}}{{\"\\n\"}} MESSAGE: {{.message}}{{\"\\n\"}}{{end}}'", "CONDITION: Done STATUS: true MESSAGE: CONDITION: Started STATUS: false MESSAGE: CONDITION: HasErrors STATUS: false MESSAGE:", "apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: my-realm-kc spec: keycloakCRName: <name of the keycloak CR> 
placeholders: ENV_KEY: secret: name: SECRET_NAME key: SECRET_KEY", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: db: vendor: postgres usernameSecret: name: usernameSecret key: usernameSecretKey passwordSecret: name: passwordSecret key: passwordSecretKey host: host database: database port: 123 schema: schema poolInitialSize: 1 poolMinSize: 2 poolMaxSize: 3 http: httpEnabled: true httpPort: 8180 httpsPort: 8543 tlsSecret: my-tls-secret hostname: hostname: https://my-hostname.tld admin: https://my-hostname.tld/admin strict: false backchannelDynamic: true features: enabled: - docker - authorization disabled: - admin - step-up-authentication transaction: xaEnabled: false", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: additionalOptions: - name: spi-connections-http-client-default-connection-pool-size secret: # Secret reference name: http-client-secret # name of the Secret key: poolSize # name of the Key in the Secret - name: spi-email-template-mycustomprovider-enabled value: true # plain text value", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: unsupported: podTemplate: metadata: labels: my-label: \"keycloak\" spec: containers: - volumeMounts: - name: test-volume mountPath: /mnt/test volumes: - name: test-volume secret: secretName: keycloak-additional-secret", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: http: httpEnabled: true hostname: strict: false", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: resources: requests: cpu: 1200m memory: 896Mi limits: cpu: 6 memory: 3Gi", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: scheduling: priorityClassName: custom-high affinity: podAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app: keycloak app.kubernetes.io/managed-by: keycloak-operator app.kubernetes.io/component: server topologyKey: topology.kubernetes.io/zone weight: 10 tolerations: - key: \"some-taint\" operator: \"Exists\" effect: \"NoSchedule\" topologySpreadConstraints: - maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: httpManagement: port: 9001 additionalOptions: - name: http-management-relative-path value: /management", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: truststores: my-truststore: secret: name: my-secret", "apiVersion: v1 kind: Secret metadata: name: my-secret stringData: cert.pem: | -----BEGIN CERTIFICATE-----", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 image: quay.io/my-company/my-keycloak:latest http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 image: quay.io/my-company/my-keycloak:latest startOptimized: false http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html-single/operator_guide/index
Chapter 10. Subscription [operators.coreos.com/v1alpha1]
Chapter 10. Subscription [operators.coreos.com/v1alpha1] Description Subscription keeps operators up to date by tracking changes to Catalogs. Type object Required metadata spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SubscriptionSpec defines an Application that can be installed status object 10.1.1. .spec Description SubscriptionSpec defines an Application that can be installed Type object Required name source sourceNamespace Property Type Description channel string config object SubscriptionConfig contains configuration specified for a subscription. installPlanApproval string Approval is the user approval policy for an InstallPlan. It must be one of "Automatic" or "Manual". name string source string sourceNamespace string startingCSV string 10.1.2. .spec.config Description SubscriptionConfig contains configuration specified for a subscription. Type object Property Type Description affinity object If specified, overrides the pod's scheduling constraints. nil sub-attributes will not override the original values in the pod.spec for those sub-attributes. Use empty object ({}) to erase original sub-attribute values. annotations object (string) Annotations is an unstructured key value map stored with each Deployment, Pod, APIService in the Operator. Typically, annotations may be set by external tools to store and retrieve arbitrary metadata. Use this field to pre-define annotations that OLM should add to each of the Subscription's deployments, pods, and apiservices. env array Env is a list of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array EnvFrom is a list of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Immutable. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ resources object Resources represents compute resources required by this container. Immutable. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ selector object Selector is the label selector for pods to be configured. 
Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. tolerations array Tolerations are the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. volumeMounts array List of VolumeMounts to set in the container. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array List of Volumes to set in the podSpec. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 10.1.3. .spec.config.affinity Description If specified, overrides the pod's scheduling constraints. nil sub-attributes will not override the original values in the pod.spec for those sub-attributes. Use empty object ({}) to erase original sub-attribute values. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 10.1.4. .spec.config.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 10.1.5. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 10.1.6. 
.spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 10.1.7. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 10.1.8. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 10.1.9. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.10. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 10.1.11. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.12. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 10.1.13. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 10.1.14. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 10.1.15. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 10.1.16. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.17. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 10.1.18. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.19. .spec.config.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 10.1.20. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 10.1.21. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 10.1.22. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. 
A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.23. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.24. 
.spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.25. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.26. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.27. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.28. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.29. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 10.1.30. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.31. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. 
Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.32. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.33. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.34. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.35. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.36. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
This array is replaced during a strategic merge patch. 10.1.37. .spec.config.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 10.1.38. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 10.1.39. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 10.1.40. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. 
Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.41. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.42. 
.spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.43. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.44. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.45. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.46. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.47. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 10.1.48. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.49. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. 
If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.50. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.51. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.52. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.53. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.54. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.55. .spec.config.env Description Env is a list of environment variables to set in the container. Cannot be updated. Type array 10.1.56. .spec.config.env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 10.1.57. .spec.config.env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 10.1.58. .spec.config.env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 10.1.59. .spec.config.env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 10.1.60. .spec.config.env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
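As an illustration of how the env and valueFrom fields described in sections 10.1.55 to 10.1.60 fit together, the following sketch shows only the .spec.config portion of a custom resource. The variable names and the referenced Secret name (app-secrets) are placeholders, not values taken from this reference; as the field descriptions note, valueFrom cannot be used when value is set.

```yaml
# Illustrative .spec.config.env fragment; app-secrets and the variable names are placeholders.
spec:
  config:
    env:
    - name: LOG_LEVEL                 # plain value
      value: debug
    - name: DB_PASSWORD               # value taken from a Secret key
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: password
          optional: false
    - name: POD_NAMESPACE             # value taken from the pod's own metadata (fieldRef)
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MEMORY_LIMIT              # value taken from the container's resource limits (resourceFieldRef)
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
          divisor: 1Mi
```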
Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.61. .spec.config.env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 10.1.62. .spec.config.envFrom Description EnvFrom is a list of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Immutable. Type array 10.1.63. .spec.config.envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 10.1.64. .spec.config.envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap must be defined 10.1.65. .spec.config.envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret must be defined 10.1.66. .spec.config.resources Description Resources represents compute resources required by this container. Immutable. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. 
This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 10.1.67. .spec.config.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 10.1.68. .spec.config.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 10.1.69. .spec.config.selector Description Selector is the label selector for pods to be configured. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.70. .spec.config.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.71. .spec.config.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.72. .spec.config.tolerations Description Tolerations are the pod's tolerations. Type array 10.1.73. .spec.config.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. 
When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 10.1.74. .spec.config.volumeMounts Description List of VolumeMounts to set in the container. Type array 10.1.75. .spec.config.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 10.1.76. .spec.config.volumes Description List of Volumes to set in the podSpec. Type array 10.1.77. .spec.config.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. 
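As an illustration of how the volumeMounts and volumes fields described in sections 10.1.74 to 10.1.77 relate, the following sketch mounts one ConfigMap-backed volume and one emptyDir volume; each volumeMounts[].name must match the name of an entry in volumes. The volume names, the ConfigMap name (app-config), and the mount paths are placeholders, not values taken from this reference.

```yaml
# Illustrative .spec.config fragment; the volume names, ConfigMap name, and mount paths are placeholders.
spec:
  config:
    volumes:
    - name: app-config
      configMap:                      # projects each ConfigMap key as a file
        name: app-config
        defaultMode: 0644
    - name: scratch
      emptyDir:                       # temporary storage tied to the pod's lifetime
        sizeLimit: 1Gi
    volumeMounts:
    - name: app-config                # must match volumes[].name
      mountPath: /etc/app
      readOnly: true
    - name: scratch
      mountPath: /var/scratch
```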
Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. 
This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 10.1.78. .spec.config.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 10.1.79. 
.spec.config.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 10.1.80. .spec.config.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 10.1.81. .spec.config.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 10.1.82. .spec.config.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 10.1.83. .spec.config.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. 
More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 10.1.84. .spec.config.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 10.1.85. .spec.config.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean optional specify whether the ConfigMap or its keys must be defined 10.1.86. .spec.config.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. 
If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.87. .spec.config.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.88. .spec.config.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 10.1.89. .spec.config.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 10.1.90. .spec.config.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits to use on created files by default. 
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 10.1.91. .spec.config.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 10.1.92. .spec.config.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 10.1.93. .spec.config.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 10.1.94. .spec.config.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.95. .spec.config.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium.
The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 10.1.96. .spec.config.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 10.1.97. .spec.config.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 
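As an illustration of the ephemeral volume type and its volumeClaimTemplate described in sections 10.1.96 and 10.1.97, the following sketch requests a small per-pod PVC; the resulting claim is named <pod name>-<volume name> and is deleted together with the pod. The volume name, storage class (standard-csi), and requested size are placeholders, not values taken from this reference.

```yaml
# Illustrative .spec.config fragment; the volume name, storage class, and size are placeholders.
spec:
  config:
    volumes:
    - name: cache
      ephemeral:
        volumeClaimTemplate:          # a PVC named <pod name>-cache is created with the pod
          metadata:
            labels:
              app.kubernetes.io/component: cache
          spec:
            accessModes:
            - ReadWriteOnce
            storageClassName: standard-csi
            resources:
              requests:
                storage: 5Gi
    volumeMounts:
    - name: cache
      mountPath: /var/cache/app
```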
Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 10.1.98. .spec.config.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 10.1.99. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. 
If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 10.1.100. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 10.1.101. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. 
For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 10.1.102. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 10.1.103. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.104. 
.spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.105. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.106. .spec.config.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 10.1.107. .spec.config.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 10.1.108. .spec.config.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 10.1.109. .spec.config.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 10.1.110. .spec.config.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 10.1.111. .spec.config.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 10.1.112. .spec.config.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. 
Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 10.1.113. .spec.config.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 10.1.114. .spec.config.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 10.1.115. .spec.config.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 10.1.116. 
.spec.config.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 10.1.117. .spec.config.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 10.1.118. .spec.config.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 10.1.119. .spec.config.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 10.1.120. .spec.config.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 10.1.121. .spec.config.volumes[].projected.sources Description sources is the list of volume projections Type array 10.1.122. 
.spec.config.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description clusterTrustBundle object ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 10.1.123. .spec.config.volumes[].projected.sources[].clusterTrustBundle Description ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. Type object Required path Property Type Description labelSelector object Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". name string Select a single ClusterTrustBundle by object name. Mutually-exclusive with signerName and labelSelector. optional boolean If true, don't block pod startup if the referenced ClusterTrustBundle(s) aren't available. If using name, then the named ClusterTrustBundle is allowed not to exist. If using signerName, then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles. path string Relative path from the volume root to write the bundle. signerName string Select all ClusterTrustBundles that match this signer name. Mutually-exclusive with name. The contents of all selected ClusterTrustBundles will be unified and deduplicated. 10.1.124. .spec.config.volumes[].projected.sources[].clusterTrustBundle.labelSelector Description Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.125. .spec.config.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.126. .spec.config.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.127. .spec.config.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean optional specify whether the ConfigMap or its keys must be defined 10.1.128. .spec.config.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.129. .spec.config.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. 
If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.130. .spec.config.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 10.1.131. .spec.config.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 10.1.132. .spec.config.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 10.1.133. .spec.config.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 10.1.134. .spec.config.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.135. .spec.config.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. 
Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean optional field specify whether the Secret or its key must be defined 10.1.136. .spec.config.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.137. .spec.config.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.138. .spec.config.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 10.1.139. .spec.config.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. 
registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 10.1.140. .spec.config.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 10.1.141. .spec.config.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 10.1.142. .spec.config.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. 
readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 10.1.143. .spec.config.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 10.1.144. .spec.config.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 10.1.145. .spec.config.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. 
Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.146. .spec.config.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.147. .spec.config.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 10.1.148. .spec.config.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 10.1.149. .spec.config.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. 
volumePath string volumePath is the path that identifies vSphere volume vmdk 10.1.150. .status Description Type object Required lastUpdated Property Type Description catalogHealth array CatalogHealth contains the Subscription's view of its relevant CatalogSources' status. It is used to determine SubscriptionStatusConditions related to CatalogSources. catalogHealth[] object SubscriptionCatalogHealth describes the health of a CatalogSource the Subscription knows about. conditions array Conditions is a list of the latest available observations about a Subscription's current state. conditions[] object SubscriptionCondition represents the latest available observations of a Subscription's state. currentCSV string CurrentCSV is the CSV the Subscription is progressing to. installPlanGeneration integer InstallPlanGeneration is the current generation of the installplan installPlanRef object InstallPlanRef is a reference to the latest InstallPlan that contains the Subscription's current CSV. installedCSV string InstalledCSV is the CSV currently installed by the Subscription. installplan object Install is a reference to the latest InstallPlan generated for the Subscription. DEPRECATED: InstallPlanRef lastUpdated string LastUpdated represents the last time that the Subscription status was updated. reason string Reason is the reason the Subscription was transitioned to its current state. state string State represents the current state of the Subscription 10.1.151. .status.catalogHealth Description CatalogHealth contains the Subscription's view of its relevant CatalogSources' status. It is used to determine SubscriptionStatusConditions related to CatalogSources. Type array 10.1.152. .status.catalogHealth[] Description SubscriptionCatalogHealth describes the health of a CatalogSource the Subscription knows about. Type object Required catalogSourceRef healthy lastUpdated Property Type Description catalogSourceRef object CatalogSourceRef is a reference to a CatalogSource. healthy boolean Healthy is true if the CatalogSource is healthy; false otherwise. lastUpdated string LastUpdated represents the last time that the CatalogSourceHealth changed 10.1.153. .status.catalogHealth[].catalogSourceRef Description CatalogSourceRef is a reference to a CatalogSource. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.1.154. .status.conditions Description Conditions is a list of the latest available observations about a Subscription's current state. Type array 10.1.155. .status.conditions[] Description SubscriptionCondition represents the latest available observations of a Subscription's state. Type object Required status type Property Type Description lastHeartbeatTime string LastHeartbeatTime is the last time we got an update on a given condition lastTransitionTime string LastTransitionTime is the last time the condition transit from one status to another message string Message is a human-readable message indicating details about last transition. reason string Reason is a one-word CamelCase reason for the condition's last transition. status string Status is the status of the condition, one of True, False, Unknown. type string Type is the type of Subscription condition. 10.1.156. .status.installPlanRef Description InstallPlanRef is a reference to the latest InstallPlan that contains the Subscription's current CSV. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.1.157. .status.installplan Description Install is a reference to the latest InstallPlan generated for the Subscription. DEPRECATED: InstallPlanRef Type object Required apiVersion kind name uuid Property Type Description apiVersion string kind string name string uuid string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. 10.2. 
API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/subscriptions GET : list objects of kind Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions DELETE : delete collection of Subscription GET : list objects of kind Subscription POST : create a Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name} DELETE : delete a Subscription GET : read the specified Subscription PATCH : partially update the specified Subscription PUT : replace the specified Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name}/status GET : read status of the specified Subscription PATCH : partially update status of the specified Subscription PUT : replace status of the specified Subscription 10.2.1. /apis/operators.coreos.com/v1alpha1/subscriptions HTTP method GET Description list objects of kind Subscription Table 10.1. HTTP responses HTTP code Response body 200 - OK SubscriptionList schema 401 - Unauthorized Empty 10.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions HTTP method DELETE Description delete collection of Subscription Table 10.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Subscription Table 10.3. HTTP responses HTTP code Response body 200 - OK SubscriptionList schema 401 - Unauthorized Empty HTTP method POST Description create a Subscription Table 10.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.5. Body parameters Parameter Type Description body Subscription schema Table 10.6. HTTP responses HTTP code Response body 200 - OK Subscription schema 201 - Created Subscription schema 202 - Accepted Subscription schema 401 - Unauthorized Empty 10.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name} Table 10.7. Global path parameters Parameter Type Description name string name of the Subscription HTTP method DELETE Description delete a Subscription Table 10.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed Table 10.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Subscription Table 10.10. HTTP responses HTTP code Response body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Subscription Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.12. HTTP responses HTTP code Response body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Subscription Table 10.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.14. Body parameters Parameter Type Description body Subscription schema Table 10.15. HTTP responses HTTP code Response body 200 - OK Subscription schema 201 - Created Subscription schema 401 - Unauthorized Empty 10.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name}/status Table 10.16.
Global path parameters Parameter Type Description name string name of the Subscription HTTP method GET Description read status of the specified Subscription Table 10.17. HTTP responses HTTP code Response body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Subscription Table 10.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.19. HTTP responses HTTP code Response body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Subscription Table 10.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.21. Body parameters Parameter Type Description body Subscription schema Table 10.22. HTTP responses HTTP code Response body 200 - OK Subscription schema 201 - Created Subscription schema 401 - Unauthorized Empty
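As a hedged, illustrative sketch of how these endpoints and the .spec.config.volumes fields above fit together (all object names, the namespace, the channel, and the token variable are hypothetical placeholders, and the spec fields channel, name, source, and sourceNamespace are assumed from the Subscription spec documented earlier in this chapter):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: operators
spec:
  channel: stable
  name: my-operator
  source: my-catalog
  sourceNamespace: openshift-marketplace
  config:
    volumes:
    # a secret volume, as described in .spec.config.volumes[].secret above
    - name: my-operator-certs
      secret:
        secretName: my-operator-certs

Saved as subscription.yaml, the object can be created and read either with the oc client or directly against the endpoints listed above:

oc apply -f subscription.yaml
oc get subscription my-operator -n operators -o yaml
curl -k -H "Authorization: Bearer $TOKEN" https://<api-server>:6443/apis/operators.coreos.com/v1alpha1/namespaces/operators/subscriptions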
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operatorhub_apis/subscription-operators-coreos-com-v1alpha1
Chapter 15. Load Balancer (octavia) Parameters
Chapter 15. Load Balancer (octavia) Parameters Parameter Description OctaviaAdminLogFacility The syslog "LOG_LOCAL" facility to use for the administrative log messages. The default value is 1 . OctaviaAdminLogTargets List of syslog endpoints, host:port comma separated list, to receive administrative log messages. OctaviaAmphoraExpiryAge The interval in seconds after which an unused Amphora will be considered expired and cleaned up. If left to 0, the configuration will not be set and the system will use the service defaults. The default value is 0 . OctaviaAmphoraSshKeyDir OpenStack Load Balancing-as-a-Service (octavia) generated SSH key directory. The default value is /etc/octavia/ssh . OctaviaAmphoraSshKeyFile Public key file path. User will be able to SSH into amphorae with the provided key. User may, in most cases, also elevate to root from user centos (CentOS), ubuntu (Ubuntu) or cloud-user (RHEL) (depends on how amphora image was created). Logging in to amphorae provides a convenient way to e.g. debug load balancing services. OctaviaAmphoraSshKeyName SSH key name. The default value is octavia-ssh-key . OctaviaAntiAffinity Flag to indicate if anti-affinity feature is turned on. The default value is true . OctaviaCaCert OpenStack Load Balancing-as-a-Service (octavia) CA certificate data. If provided, this will create or update a file on the host with the path provided in OctaviaCaCertFile with the certificate data. OctaviaCaKey The private key for the certificate provided in OctaviaCaCert. If provided, this will create or update a file on the host with the path provided in OctaviaCaKeyFile with the key data. OctaviaCaKeyPassphrase CA private key passphrase. OctaviaClientCert OpenStack Load Balancing-as-a-Service (octavia) client certificate data. If provided, this will create or update a file on the host with the path provided in OctaviaClientCertFile with the certificate data. OctaviaConnectionLogging When false, tenant connection flows will not be logged. The default value is true . OctaviaDefaultListenerCiphers Default list of OpenSSL ciphers for new TLS-enabled listeners. OctaviaDefaultPoolCiphers Default list of OpenSSL ciphers for new TLS-enabled pools. OctaviaDisableLocalLogStorage When true, logs will not be stored on the amphora filesystem. This includes all kernel, system, and security logs. The default value is false . OctaviaEnableDriverAgent Set to false if the driver agent needs to be disabled for some reason. The default value is true . OctaviaEnableJobboard Enable jobboard for the amphorav2 driver, it enables flow resumption for the amphora driver. The default value is false . OctaviaFlavorId OpenStack Compute (nova) flavor ID to be used when creating the nova flavor for amphora. The default value is 65 . OctaviaForwardAllLogs When true, all log messages from the amphora will be forwarded to the administrative log endpoints, including non-load balancing related logs. The default value is false . OctaviaGenerateCerts Enable internal generation of certificates for secure communication with amphorae for isolated private clouds or systems where security is not a concern. Otherwise, use OctaviaCaCert, OctaviaCaKey, OctaviaCaKeyPassphrase, OctaviaClientCert and OctaviaServerCertsKeyPassphrase to configure OpenStack Load Balancing-as-a-Service (octavia). The default value is false . OctaviaJobboardExpirationTime Expiry of claimed jobs in jobboard. The default value is 30 .
OctaviaListenerTlsVersions List of TLS versions to use for new TLS-enabled listeners. The default value is ['TLSv1.2', 'TLSv1.3'] . OctaviaLoadBalancerTopology Load balancer topology configuration. OctaviaLogOffload When true, log messages from the amphora will be forwarded to the administrative log endpoints and will be stored with the controller logs. The default value is false . OctaviaLogOffloadProtocol The protocol to use for the RSyslog log offloading feature. The default value is udp . OctaviaMinimumTlsVersion Minimum allowed TLS version for listeners and pools. OctaviaMultiVcpuFlavorId Name of the nova flavor for the amphora with multiple vCPUs. The default value is amphora-mvcpu-ha . OctaviaMultiVcpuFlavorProperties Dictionary describing the nova flavor for amphora with active-standby topology and multiple vCPUs for vertical scaling. The default value is {'ram': '4096', 'disk': '3', 'vcpus': '4'} . OctaviaPoolTlsVersions List of TLS versions to use for new TLS-enabled pools. The default value is ['TLSv1.2', 'TLSv1.3'] . OctaviaTenantLogFacility The syslog "LOG_LOCAL" facility to use for the tenant traffic flow log messages. The default value is 0 . OctaviaTenantLogTargets List of syslog endpoints, host:port comma separated list, to receive tenant traffic flow log messages. OctaviaTimeoutClientData Frontend client inactivity timeout. The default value is 50000 . OctaviaTimeoutMemberData Backend member inactivity timeout. The default value is 50000 . OctaviaTlsCiphersProhibitList List of OpenSSL ciphers. Usage of these ciphers will be blocked. RedisPassword The password for the redis service account.
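These parameters are typically set in a custom environment file that is passed to the overcloud deployment command. The following is a minimal, hypothetical sketch: the file name, the overridden values, and the template path are illustrative assumptions, not defaults taken from the table above.

Example /home/stack/templates/octavia-overrides.yaml:

parameter_defaults:
  OctaviaAntiAffinity: true
  OctaviaConnectionLogging: false
  OctaviaTimeoutClientData: 100000
  OctaviaTimeoutMemberData: 100000

The file is then included in the deployment, for example:

openstack overcloud deploy --templates -e /home/stack/templates/octavia-overrides.yaml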
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/overcloud_parameters/ref_load-balancer-octavia-parameters_overcloud_parameters
Chapter 18. Using NetworkManager to disable IPv6 for a specific connection
Chapter 18. Using NetworkManager to disable IPv6 for a specific connection On a system that uses NetworkManager to manage network interfaces, you can disable the IPv6 protocol if the network only uses IPv4. If you disable IPv6, NetworkManager automatically sets the corresponding sysctl values in the kernel. Note If disabling IPv6 using kernel tunables or kernel boot parameters, additional consideration must be given to system configuration. For more information, see the Red Hat Knowledgebase solution How do I disable or enable the IPv6 protocol in RHEL. 18.1. Disabling IPv6 on a connection using nmcli You can use the nmcli utility to disable the IPv6 protocol on the command line. Prerequisites The system uses NetworkManager to manage network interfaces. Procedure Optional: Display the list of network connections: Set the ipv6.method parameter of the connection to disabled: Restart the network connection: Verification Display the IP settings of the device: If no inet6 entry is displayed, IPv6 is disabled on the device. Verify that the /proc/sys/net/ipv6/conf/enp1s0/disable_ipv6 file now contains the value 1: The value 1 means that IPv6 is disabled for the device.
[ "nmcli connection show NAME UUID TYPE DEVICE Example 7a7e0151-9c18-4e6f-89ee-65bb2d64d365 ethernet enp1s0", "nmcli connection modify Example ipv6.method \"disabled\"", "nmcli connection up Example", "ip address show enp1s0 2: enp1s0 : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:6b:74:be brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.10.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever", "cat /proc/sys/net/ipv6/conf/ enp1s0 /disable_ipv6 1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/using-networkmanager-to-disable-ipv6-for-a-specific-connection_configuring-and-managing-networking
Chapter 11. jndi
Chapter 11. jndi 11.1. jndi:alias 11.1.1. Description Create a JNDI alias on a given name. 11.1.2. Syntax jndi:alias [options] name alias 11.1.3. Arguments Name Description name The JNDI name alias The JNDI alias 11.1.4. Options Name Description --help Display this help message 11.2. jndi:bind 11.2.1. Description Bind an OSGi service in the JNDI context 11.2.2. Syntax jndi:bind [options] service name 11.2.3. Arguments Name Description service The ID of the OSGi service to bind name The JNDI name to bind the OSGi service 11.2.4. Options Name Description --help Display this help message 11.3. jndi:contexts 11.3.1. Description List the JNDI sub-contexts. 11.3.2. Syntax jndi:contexts [options] [context] 11.3.3. Arguments Name Description context The base JNDI context 11.3.4. Options Name Description --help Display this help message 11.4. jndi:create 11.4.1. Description Create a new JNDI sub-context. 11.4.2. Syntax jndi:create [options] context 11.4.3. Arguments Name Description context The JNDI sub-context name 11.4.4. Options Name Description --help Display this help message 11.5. jndi:delete 11.5.1. Description Delete a JNDI sub-context. 11.5.2. Syntax jndi:delete [options] context 11.5.3. Arguments Name Description context The JNDI sub-context name 11.5.4. Options Name Description --help Display this help message 11.6. jndi:names 11.6.1. Description List the JNDI names. 11.6.2. Syntax jndi:names [options] [context] 11.6.3. Arguments Name Description context The JNDI context to display the names 11.6.4. Options Name Description --help Display this help message 11.7. jndi:unbind 11.7.1. Description Unbind a JNDI name. 11.7.2. Syntax jndi:unbind [options] name 11.7.3. Arguments Name Description name The JNDI name to unbind 11.7.4. Options Name Description --help Display this help message
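The following is a hedged usage sketch that chains the commands above in the Karaf console; the OSGi service ID 344 and the JNDI names are hypothetical placeholders, and the prompt is abbreviated:

karaf@root()> jndi:create services
karaf@root()> jndi:bind 344 services/my-datasource
karaf@root()> jndi:alias services/my-datasource aliases/my-datasource
karaf@root()> jndi:names
karaf@root()> jndi:unbind aliases/my-datasource
karaf@root()> jndi:unbind services/my-datasource
karaf@root()> jndi:delete services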
null
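The reference above lists each command in isolation; the following hypothetical Karaf console session shows how several of them can be combined. The sub-context demo/services, the JNDI names, and the OSGi service ID 44 are assumptions for illustration only:
karaf@root()> jndi:create demo/services                      # create a new sub-context
karaf@root()> jndi:bind 44 demo/services/audit               # bind OSGi service 44 to a JNDI name
karaf@root()> jndi:alias demo/services/audit services/audit  # add an alias for the same name
karaf@root()> jndi:names demo/services                       # list the names below the sub-context
karaf@root()> jndi:unbind services/audit                     # remove the alias
karaf@root()> jndi:unbind demo/services/audit                # remove the original binding
karaf@root()> jndi:delete demo/services                      # delete the now-empty sub-context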
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/jndi
4.5. Configuring Static Routes in ifcfg files
4.5. Configuring Static Routes in ifcfg files Static routes set using ip commands at the command prompt will be lost if the system is shut down or restarted. To configure static routes to be persistent after a system restart, they must be placed in per-interface configuration files in the /etc/sysconfig/network-scripts/ directory. The file name should be of the format route-interface. There are two types of commands to use in the configuration files: Static Routes Using the IP Command Arguments Format If required in a per-interface configuration file, for example /etc/sysconfig/network-scripts/route-enp1s0, define a route to a default gateway on the first line. This is only required if the gateway is not set through DHCP and is not set globally in the /etc/sysconfig/network file: default via 192.168.1.1 dev interface where 192.168.1.1 is the IP address of the default gateway. The interface is the interface that is connected to, or can reach, the default gateway. The dev option is optional and can be omitted. Note that this setting takes precedence over a setting in the /etc/sysconfig/network file. If a route to a remote network is required, a static route can be specified as follows. Each line is parsed as an individual route: 10.10.10.0/24 via 192.168.1.1 [ dev interface ] where 10.10.10.0/24 is the network address and prefix length of the remote or destination network. The address 192.168.1.1 is the IP address leading to the remote network. It is preferably the next-hop address, but the address of the exit interface will also work. The "next hop" means the remote end of a link, for example a gateway or router. The dev option can be used to specify the exit interface, but it is not required. Add as many static routes as required. The following is an example of a route-interface file using the ip command arguments format. The default gateway is 192.168.0.1, the interface is enp1s0, and a leased line or WAN connection is available at 192.168.0.10. The two static routes are for reaching the 10.10.10.0/24 network and the 172.16.1.10/32 host: In the above example, packets going to the local 192.168.0.0/24 network will be directed out the interface attached to that network. Packets going to the 10.10.10.0/24 network and the 172.16.1.10/32 host will be directed to 192.168.0.10. Packets to unknown remote networks will use the default gateway; therefore, static routes should only be configured for remote networks or hosts if the default route is not suitable. Remote in this context means any networks or hosts that are not directly attached to the system. For IPv6 configuration, the following is an example of a route6-interface file in ip route format: Specifying an exit interface is optional. It can be useful if you want to force traffic out of a specific interface. For example, in the case of a VPN, you can force traffic to a remote network to pass through a tun0 interface even when the interface is in a different subnet from the destination network. The ip route format can be used to specify a source address. For example: To define an existing policy-based routing configuration, which specifies multiple routing tables, see Section 4.5.1, "Understanding Policy-routing". Important If the default gateway is already assigned by DHCP and if the same gateway with the same metric is specified in a configuration file, an error will occur during start-up, or when bringing up an interface. The following error message may be shown: "RTNETLINK answers: File exists". This error may be ignored.
Static Routes Using the Network/Netmask Directives Format You can also use the network/netmask directives format for route-interface files. The following is a template for the network/netmask format, with instructions following afterwards: ADDRESS0=10.10.10.0 NETMASK0=255.255.255.0 GATEWAY0=192.168.1.1 ADDRESS0=10.10.10.0 is the network address of the remote network or host to be reached. NETMASK0=255.255.255.0 is the netmask for the network address defined with ADDRESS0=10.10.10.0. GATEWAY0=192.168.1.1 is the default gateway, or an IP address that can be used to reach ADDRESS0=10.10.10.0. The following is an example of a route-interface file using the network/netmask directives format. The default gateway is 192.168.0.1 but a leased line or WAN connection is available at 192.168.0.10. The two static routes are for reaching the 10.10.10.0/24 and 172.16.1.0/24 networks: ADDRESS0=10.10.10.0 NETMASK0=255.255.255.0 GATEWAY0=192.168.0.10 ADDRESS1=172.16.1.10 NETMASK1=255.255.255.0 GATEWAY1=192.168.0.10 Subsequent static routes must be numbered sequentially, and must not skip any values. For example, ADDRESS0, ADDRESS1, ADDRESS2, and so on. By default, forwarding packets from one interface to another, or out of the same interface, is disabled for security reasons. This prevents the system from acting as a router for external traffic. If you need the system to route external traffic, such as when sharing a connection or configuring a VPN server, you will need to enable IP forwarding. See the Red Hat Enterprise Linux 7 Security Guide for more details. 4.5.1. Understanding Policy-routing Policy-routing, also known as source-routing, is a mechanism for more flexible routing configurations. Routing decisions are commonly made based on the destination IP address of a packet. Policy-routing allows more flexibility to select routes based on other routing properties, such as the source IP address, source port, or protocol type. Routing tables store route information about networks. They are identified by either numeric values or names, which can be configured in the /etc/iproute2/rt_tables file. The default table is identified with 254. To use policy-routing, you also need rules. Rules are used to select a routing table, based on certain properties of packets. For initscripts, the routing table is a property of the route that can be configured through the table argument. The ip route format can be used to define an existing policy-based routing configuration, which specifies multiple routing tables: To specify routing rules in initscripts, add them to the /etc/sysconfig/network-scripts/rule-enp1s0 file for IPv4 or to the /etc/sysconfig/network-scripts/rule6-enp1s0 file for IPv6, as shown in the sketch after the example listings below. NetworkManager supports policy-routing, but rules are not supported yet. The rules must be configured by the user, by running a custom script. For each manual static route, a routing table can be selected: ipv4.route-table for IPv4 and ipv6.route-table for IPv6. By setting routes to a particular table, all routes from DHCP, autoconf6, and DHCP6 are placed in that specific table. In addition, all routes for subnets that already have configured addresses are placed in the corresponding routing table. For example, if you configure the 192.168.1.10/24 address, the 192.168.1.0/24 subnet is placed in the table configured by ipv4.route-table. For more details about policy-routing rules, see the ip-rule(8) man page. For routing tables, see the ip-route(8) man page.
[ "default via 192.168.0.1 dev enp1s0 10.10.10.0/24 via 192.168.0.10 dev enp1s0 172.16.1.10/32 via 192.168.0.10 dev enp1s0", "2001:db8:1::/48 via 2001:db8::1 metric 2048 2001:db8:2::/48", "10.10.10.0/24 via 192.168.0.10 src 192.168.0.2", "ADDRESS0=10.10.10.0 NETMASK0=255.255.255.0 GATEWAY0=192.168.1.1", "ADDRESS0=10.10.10.0 NETMASK0=255.255.255.0 GATEWAY0=192.168.0.10 ADDRESS1=172.16.1.10 NETMASK1=255.255.255.0 GATEWAY1=192.168.0.10", "10.10.10.0/24 via 192.168.0.10 table 1 10.10.10.0/24 via 192.168.0.10 table 2" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configuring_static_routes_in_ifcfg_files
Chapter 6. PriorityClass [scheduling.k8s.io/v1]
Chapter 6. PriorityClass [scheduling.k8s.io/v1] Description PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer. Type object Required value 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string description is an arbitrary string that usually provides guidelines on when this priority class should be used. globalDefault boolean globalDefault specifies whether this PriorityClass should be considered as the default priority for pods that do not have any priority class. Only one PriorityClass can be marked as globalDefault . However, if more than one PriorityClasses exists with their globalDefault field set to true, the smallest value of such global default PriorityClasses will be used as the default priority. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. value integer The value of this priority class. This is the actual priority that pods receive when they have the name of this class in their pod spec. 6.2. API endpoints The following API endpoints are available: /apis/scheduling.k8s.io/v1/priorityclasses DELETE : delete collection of PriorityClass GET : list or watch objects of kind PriorityClass POST : create a PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses GET : watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. /apis/scheduling.k8s.io/v1/priorityclasses/{name} DELETE : delete a PriorityClass GET : read the specified PriorityClass PATCH : partially update the specified PriorityClass PUT : replace the specified PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} GET : watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/scheduling.k8s.io/v1/priorityclasses Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PriorityClass Table 6.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 6.3. Body parameters Parameter Type Description body DeleteOptions schema Table 6.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PriorityClass Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.6. HTTP responses HTTP code Reponse body 200 - OK PriorityClassList schema 401 - Unauthorized Empty HTTP method POST Description create a PriorityClass Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.8. Body parameters Parameter Type Description body PriorityClass schema Table 6.9. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 202 - Accepted PriorityClass schema 401 - Unauthorized Empty 6.2.2. /apis/scheduling.k8s.io/v1/watch/priorityclasses Table 6.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/scheduling.k8s.io/v1/priorityclasses/{name} Table 6.12. Global path parameters Parameter Type Description name string name of the PriorityClass Table 6.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PriorityClass Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.15. Body parameters Parameter Type Description body DeleteOptions schema Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PriorityClass Table 6.17. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PriorityClass Table 6.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.19. Body parameters Parameter Type Description body Patch schema Table 6.20. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PriorityClass Table 6.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.22. Body parameters Parameter Type Description body PriorityClass schema Table 6.23. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty 6.2.4. /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} Table 6.24. Global path parameters Parameter Type Description name string name of the PriorityClass Table 6.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
null
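The API reference above can be exercised with a plain manifest. The following is a minimal sketch; the name high-priority-batch, the value, and the description are illustrative assumptions rather than values defined in this chapter:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-batch         # illustrative name
value: 100000                       # priority assigned to pods that reference this class
globalDefault: false                # do not make this the cluster-wide default
preemptionPolicy: PreemptLowerPriority
description: "Example class for latency-sensitive workloads."
Creating the object, for example with oc create -f priorityclass.yaml, issues the POST request to /apis/scheduling.k8s.io/v1/priorityclasses described in section 6.2.1; pods opt in by setting spec.priorityClassName: high-priority-batch in their pod spec.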
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/schedule_and_quota_apis/priorityclass-scheduling-k8s-io-v1
Chapter 5. Managing Policies
Chapter 5. Managing Policies As mentioned previously, policies define the conditions that must be satisfied before granting access to an object. You can view all policies associated with a resource server by clicking the Policy tab when editing a resource server. Policies On this tab, you can view the list of previously created policies as well as create and edit a policy. To create a new policy, in the upper right corner of the policy list, select a policy type from the Create policy dropdown list. Details about each policy type are described in this section. 5.1. User-Based Policy You can use this type of policy to define conditions for your permissions where a set of one or more users is permitted to access an object. To create a new user-based policy, select User in the dropdown list in the upper right corner of the policy listing. Add a User-Based Policy 5.1.1. Configuration Name A human-readable and unique string identifying the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Users Specifies which users are given access by this policy. Logic The Logic of this policy to apply after the other conditions have been evaluated. 5.2. Role-Based Policy You can use this type of policy to define conditions for your permissions where a set of one or more roles is permitted to access an object. By default, roles added to this policy are not specified as required and the policy will grant access if the user requesting access has been granted any of these roles. However, you can specify a specific role as required if you want to enforce a specific role. You can also combine required and non-required roles, regardless of whether they are realm or client roles. Role policies can be useful when you need more restricted role-based access control (RBAC), where specific roles must be enforced to grant access to an object. For instance, you can enforce that a user must consent to allowing a client application (which is acting on the user's behalf) to access the user's resources. You can use Red Hat Single Sign-On Client Scope Mapping to enable consent pages or even enforce clients to explicitly provide a scope when obtaining access tokens from a Red Hat Single Sign-On server. To create a new role-based policy, select Role in the dropdown list in the upper right corner of the policy listing. Add Role-Based Policy 5.2.1. Configuration Name A human-readable and unique string describing the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Realm Roles Specifies which realm roles are permitted by this policy. Client Roles Specifies which client roles are permitted by this policy. To enable this field, you must first select a Client. Logic The Logic of this policy to apply after the other conditions have been evaluated. 5.2.2. Defining a Role as Required When creating a role-based policy, you can specify a specific role as Required. When you do that, the policy will grant access only if the user requesting access has been granted all the required roles. Both realm and client roles can be configured as such. Example of Required Role To specify a role as required, select the Required checkbox for the role you want to configure as required.
Required roles can be useful when your policy defines multiple roles but only a subset of them is mandatory. In this case, you can combine realm and client roles to enable an even more fine-grained role-based access control (RBAC) model for your application. For example, you can have policies specific to a client and require a specific client role associated with that client. Or you can enforce that access is granted only in the presence of a specific realm role. You can also combine both approaches within the same policy. 5.3. JavaScript-Based Policy You can use this type of policy to define conditions for your permissions using JavaScript. It is one of the rule-based policy types supported by Red Hat Single Sign-On, and provides flexibility to write any policy based on the Evaluation API. To create a new JavaScript-based policy, select JavaScript in the dropdown list in the upper right corner of the policy listing. Note By default, JavaScript Policies cannot be uploaded to the server. You should prefer deploying your JS Policies directly to the server as described in JavaScript Providers. If you still want to use the Red Hat Single Sign-On Administration Console to manage your JS policies, you should enable the Upload Scripts feature. Add JavaScript Policy 5.3.1. Configuration Name A human-readable and unique string describing the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Code The JavaScript code providing the conditions for this policy. Logic The Logic of this policy to apply after the other conditions have been evaluated. 5.3.2. Creating a JS Policy from a Deployed JAR File Red Hat Single Sign-On allows you to deploy a JAR file in order to deploy scripts to the server. Please take a look at JavaScript Providers for more details. Once you have your scripts deployed, you should be able to select the scripts you deployed from the list of available policy providers. 5.3.3. Examples 5.3.3.1. Checking for attributes from the evaluation context Here is a simple example of a JavaScript-based policy that uses attribute-based access control (ABAC) to define a condition based on an attribute obtained from the execution context: var context = $evaluation.getContext(); var contextAttributes = context.getAttributes(); if (contextAttributes.containsValue('kc.client.network.ip_address', '127.0.0.1')) { $evaluation.grant(); } 5.3.3.2. Checking for attributes from the current identity Here is a simple example of a JavaScript-based policy that uses attribute-based access control (ABAC) to define a condition based on an attribute associated with the current identity: var context = $evaluation.getContext(); var identity = context.getIdentity(); var attributes = identity.getAttributes(); var email = attributes.getValue('email').asString(0); if (email.endsWith('@keycloak.org')) { $evaluation.grant(); } These attributes are mapped from whatever claim is defined in the token that was used in the authorization request. 5.3.3.3. Checking for roles granted to the current identity You can also use Role-Based Access Control (RBAC) in your policies.
In the example below, we check if a user is granted the keycloak_user realm role: var context = $evaluation.getContext(); var identity = context.getIdentity(); if (identity.hasRealmRole('keycloak_user')) { $evaluation.grant(); } Or you can check if a user is granted the my-client-role client role, where my-client is the client id of the client application: var context = $evaluation.getContext(); var identity = context.getIdentity(); if (identity.hasClientRole('my-client', 'my-client-role')) { $evaluation.grant(); } 5.3.3.4. Checking for roles granted to a user To check for realm roles granted to a user: var realm = $evaluation.getRealm(); if (realm.isUserInRealmRole('marta', 'role-a')) { $evaluation.grant(); } Or for client roles granted to a user: var realm = $evaluation.getRealm(); if (realm.isUserInClientRole('marta', 'my-client', 'some-client-role')) { $evaluation.grant(); } 5.3.3.5. Checking for roles granted to a group To check for realm roles granted to a group: var realm = $evaluation.getRealm(); if (realm.isGroupInRole('/Group A/Group D', 'role-a')) { $evaluation.grant(); } 5.3.3.6. Pushing arbitrary claims to the resource server To push arbitrary claims to the resource server in order to provide additional information on how permissions should be enforced: var permission = $evaluation.getPermission(); // decide if permission should be granted if (granted) { permission.addClaim('claim-a', 'claim-a'); permission.addClaim('claim-a', 'claim-a1'); permission.addClaim('claim-b', 'claim-b'); } 5.3.3.7. Checking for group membership var realm = $evaluation.getRealm(); if (realm.isUserInGroup('marta', '/Group A/Group B')) { $evaluation.grant(); } 5.3.3.8. Mixing different access control mechanisms You can also use a combination of several access control mechanisms. The example below shows how roles (RBAC) and claims/attributes (ABAC) checks can be used within the same policy. In this case, we check if the user is granted the admin role or has an email address from the keycloak.org domain: var context = $evaluation.getContext(); var identity = context.getIdentity(); var attributes = identity.getAttributes(); var email = attributes.getValue('email').asString(0); if (identity.hasRealmRole('admin') || email.endsWith('@keycloak.org')) { $evaluation.grant(); } Note When writing your own rules, keep in mind that the $evaluation object is an object implementing org.keycloak.authorization.policy.evaluation.Evaluation. For more information about what you can access from this interface, see the Evaluation API. 5.4. Time-Based Policy You can use this type of policy to define time conditions for your permissions. To create a new time-based policy, select Time in the dropdown list in the upper right corner of the policy listing. Add Time Policy 5.4.1. Configuration Name A human-readable and unique string describing the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Not Before Defines the time before which access must not be granted. Permission is granted only if the current date/time is later than or equal to this value. Not On or After Defines the time after which access must not be granted. Permission is granted only if the current date/time is earlier than or equal to this value. Day of Month Defines the day of month that access must be granted. You can also specify a range of dates.
In this case, permission is granted only if the current day of the month is between or equal to the two values specified. Month Defines the month that access must be granted. You can also specify a range of months. In this case, permission is granted only if the current month is between or equal to the two values specified. Year Defines the year that access must be granted. You can also specify a range of years. In this case, permission is granted only if the current year is between or equal to the two values specified. Hour Defines the hour that access must be granted. You can also specify a range of hours. In this case, permission is granted only if current hour is between or equal to the two values specified. Minute Defines the minute that access must be granted. You can also specify a range of minutes. In this case, permission is granted only if the current minute is between or equal to the two values specified. Logic The Logic of this policy to apply after the other conditions have been evaluated. Access is only granted if all conditions are satisfied. Red Hat Single Sign-On will perform an AND based on the outcome of each condition. 5.5. Aggregated Policy As mentioned previously, Red Hat Single Sign-On allows you to build a policy of policies, a concept referred to as policy aggregation. You can use policy aggregation to reuse existing policies to build more complex ones and keep your permissions even more decoupled from the policies that are evaluated during the processing of authorization requests. To create a new aggregated policy, select Aggregated in the dropdown list located in the right upper corner of the policy listing. Add an Aggregated Policy Let's suppose you have a resource called Confidential Resource that can be accessed only by users from the keycloak.org domain and from a certain range of IP addresses. You can create a single policy with both conditions. However, you want to reuse the domain part of this policy to apply to permissions that operates regardless of the originating network. You can create separate policies for both domain and network conditions and create a third policy based on the combination of these two policies. With an aggregated policy, you can freely combine other policies and then apply the new aggregated policy to any permission you want. Note When creating aggregated policies, be mindful that you are not introducing a circular reference or dependency between policies. If a circular dependency is detected, you cannot create or update the policy. 5.5.1. Configuration Name A human-readable and unique string describing the policy. We strongly suggest that you use names that are closely related with your business and security requirements, so you can identify them more easily and also know what they mean. Description A string with more details about this policy. Apply Policy Defines a set of one or more policies to associate with the aggregated policy. To associate a policy you can either select an existing policy or create a new one by selecting the type of the policy you want to create. Decision Strategy The decision strategy for this permission. Logic The Logic of this policy to apply after the other conditions have been evaluated. 5.5.2. Decision Strategy for Aggregated Policies When creating aggregated policies, you can also define the decision strategy that will be used to determine the final decision based on the outcome from each policy. Unanimous The default strategy if none is provided. 
In this case, all policies must evaluate to a positive decision for the final decision to be also positive. Affirmative In this case, at least one policy must evaluate to a positive decision in order for the final decision to be also positive. Consensus In this case, the number of positive decisions must be greater than the number of negative decisions. If the number of positive and negative decisions is the same, the final decision will be negative. 5.6. Client-Based Policy You can use this type of policy to define conditions for your permissions where a set of one or more clients is permitted to access an object. To create a new client-based policy, select Client in the dropdown list in the upper right corner of the policy listing. Add a Client-Based Policy 5.6.1. Configuration Name A human-readable and unique string identifying the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Clients Specifies which clients are given access by this policy. Logic The Logic of this policy to apply after the other conditions have been evaluated. 5.7. Group-Based Policy You can use this type of policy to define conditions for your permissions where a set of one or more groups (and their hierarchies) is permitted to access an object. To create a new group-based policy, select Group in the dropdown list in the upper right corner of the policy listing. Add Group-Based Policy 5.7.1. Configuration Name A human-readable and unique string describing the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Groups Claim Specifies the name of the claim in the token holding the group names and/or paths. Usually, authorization requests are processed based on an ID Token or Access Token previously issued to a client acting on behalf of some user. If defined, the token must include a claim from where this policy is going to obtain the groups the user is a member of. If not defined, user's groups are obtained from your realm configuration. Groups Allows you to select the groups that should be enforced by this policy when evaluating permissions. After adding a group, you can extend access to children of the group by marking the checkbox Extend to Children . If left unmarked, access restrictions only applies to the selected group. Logic The Logic of this policy to apply after the other conditions have been evaluated. 5.7.2. Extending Access to Child Groups By default, when you add a group to this policy, access restrictions will only apply to members of the selected group. Under some circumstances, it might be necessary to allow access not only to the group itself but to any child group in the hierarchy. For any group added you can mark a checkbox Extend to Children in order to extend access to child groups. Extending Access to Child Groups In the example above, the policy is granting access for any user member of IT or any of its children. 5.8. Positive and Negative Logic Policies can be configured with positive or negative logic. Briefly, you can use this option to define whether the policy result should be kept as it is or be negated. For example, suppose you want to create a policy where only users not granted with a specific role should be given access. 
In this case, you can create a role-based policy using that role and set its Logic field to Negative . If you keep Positive , which is the default behavior, the policy result will be kept as it is. 5.9. Policy Evaluation API When writing rule-based policies using JavaScript, Red Hat Single Sign-On provides an Evaluation API that provides useful information to help determine whether a permission should be granted. This API consists of a few interfaces that provide you access to information, such as The permission being evaluated, representing both the resource and scopes being requested. The attributes associated with the resource being requested Runtime environment and any other attribute associated with the execution context Information about users such as group membership and roles The main interface is org.keycloak.authorization.policy.evaluation.Evaluation , which defines the following contract: public interface Evaluation { /** * Returns the {@link ResourcePermission} to be evaluated. * * @return the permission to be evaluated */ ResourcePermission getPermission(); /** * Returns the {@link EvaluationContext}. Which provides access to the whole evaluation runtime context. * * @return the evaluation context */ EvaluationContext getContext(); /** * Returns a {@link Realm} that can be used by policies to query information. * * @return a {@link Realm} instance */ Realm getRealm(); /** * Grants the requested permission to the caller. */ void grant(); /** * Denies the requested permission. */ void deny(); } When processing an authorization request, Red Hat Single Sign-On creates an Evaluation instance before evaluating any policy. This instance is then passed to each policy to determine whether access is GRANT or DENY . Policies determine this by invoking the grant() or deny() methods on an Evaluation instance. By default, the state of the Evaluation instance is denied, which means that your policies must explicitly invoke the grant() method to indicate to the policy evaluation engine that permission should be granted. For more information about the Evaluation API see the JavaDocs . 5.9.1. The Evaluation Context The evaluation context provides useful information to policies during their evaluation. public interface EvaluationContext { /** * Returns the {@link Identity} that represents an entity (person or non-person) to which the permissions must be granted, or not. * * @return the identity to which the permissions must be granted, or not */ Identity getIdentity(); /** * Returns all attributes within the current execution and runtime environment. * * @return the attributes within the current execution and runtime environment */ Attributes getAttributes(); } From this interface, policies can obtain: The authenticated Identity Information about the execution context and runtime environment The Identity is built based on the OAuth2 Access Token that was sent along with the authorization request, and this construct has access to all claims extracted from the original token. For example, if you are using a Protocol Mapper to include a custom claim in an OAuth2 Access Token you can also access this claim from a policy and use it to build your conditions. The EvaluationContext also gives you access to attributes related to both the execution and runtime environments. For now, there only a few built-in attributes. Table 5.1. Execution and Runtime Attributes Name Description Type kc.time.date_time Current date and time String. 
Format MM/dd/yyyy hh:mm:ss
kc.client.network.ip_address IPv4 address of the client String
kc.client.network.host Client's host name String
kc.client.id The client id String
kc.client.user_agent The value of the 'User-Agent' HTTP header String[]
kc.realm.name The name of the realm String
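The policies described in this chapter are evaluated on the server whenever a client asks Red Hat Single Sign-On for permissions. As a minimal sketch of how such an evaluation is triggered (this request is not part of this section; the host name, realm name, client ID, and the ACCESS_TOKEN variable are placeholder assumptions), a client that already holds an OAuth2 access token can request permissions from the token endpoint using the UMA grant type:
# Placeholder host, realm, and resource-server client ID; adjust to your deployment
curl -X POST "https://sso.example.com/auth/realms/example-realm/protocol/openid-connect/token" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \
  --data "audience=example-resource-server"
If the applicable policies evaluate to a grant (after the Logic setting and the resource server's decision strategy are applied), the response contains a token carrying the granted permissions; otherwise the request is denied.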
[ "var context = USDevaluation.getContext(); var contextAttributes = context.getAttributes(); if (contextAttributes.containsValue('kc.client.network.ip_address', '127.0.0.1')) { USDevaluation.grant(); }", "var context = USDevaluation.getContext(); var identity = context.getIdentity(); var attributes = identity.getAttributes(); var email = attributes.getValue('email').asString(0); if (email.endsWith('@keycloak.org')) { USDevaluation.grant(); }", "var context = USDevaluation.getContext(); var identity = context.getIdentity(); if (identity.hasRealmRole('keycloak_user')) { USDevaluation.grant(); }", "var context = USDevaluation.getContext(); var identity = context.getIdentity(); if (identity.hasClientRole('my-client', 'my-client-role')) { USDevaluation.grant(); }", "var realm = USDevaluation.getRealm(); if (realm.isUserInRealmRole('marta', 'role-a')) { USDevaluation.grant(); }", "var realm = USDevaluation.getRealm(); if (realm.isUserInClientRole('marta', 'my-client', 'some-client-role')) { USDevaluation.grant(); }", "var realm = USDevaluation.getRealm(); if (realm.isGroupInRole('/Group A/Group D', 'role-a')) { USDevaluation.grant(); }", "var permission = USDevaluation.getPermission(); // decide if permission should be granted if (granted) { permission.addClaim('claim-a', 'claim-a'); permission.addClaim('claim-a', 'claim-a1'); permission.addClaim('claim-b', 'claim-b'); }", "var realm = USDevaluation.getRealm(); if (realm.isUserInGroup('marta', '/Group A/Group B')) { USDevaluation.grant(); }", "var context = USDevaluation.getContext(); var identity = context.getIdentity(); var attributes = identity.getAttributes(); var email = attributes.getValue('email').asString(0); if (identity.hasRealmRole('admin') || email.endsWith('@keycloak.org')) { USDevaluation.grant(); }", "public interface Evaluation { /** * Returns the {@link ResourcePermission} to be evaluated. * * @return the permission to be evaluated */ ResourcePermission getPermission(); /** * Returns the {@link EvaluationContext}. Which provides access to the whole evaluation runtime context. * * @return the evaluation context */ EvaluationContext getContext(); /** * Returns a {@link Realm} that can be used by policies to query information. * * @return a {@link Realm} instance */ Realm getRealm(); /** * Grants the requested permission to the caller. */ void grant(); /** * Denies the requested permission. */ void deny(); }", "public interface EvaluationContext { /** * Returns the {@link Identity} that represents an entity (person or non-person) to which the permissions must be granted, or not. * * @return the identity to which the permissions must be granted, or not */ Identity getIdentity(); /** * Returns all attributes within the current execution and runtime environment. * * @return the attributes within the current execution and runtime environment */ Attributes getAttributes(); }" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/authorization_services_guide/policy_overview
Chapter 5. Remediating nodes with Machine Health Checks
Chapter 5. Remediating nodes with Machine Health Checks Machine health checks automatically repair unhealthy machines in a particular machine pool. 5.1. About machine health checks Note You can only apply a machine health check to control plane machines on clusters that use control plane machine sets. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 5.1.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . 5.2. Configuring machine health checks to use the Self Node Remediation Operator Use the following procedure to configure the worker or control-plane machine health checks to use the Self Node Remediation Operator as a remediation provider. Note To use the Self Node Remediation Operator as a remediation provider for machine health checks, a machine must have an associated node in the cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SelfNodeRemediationTemplate CR: Define the SelfNodeRemediationTemplate CR: apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationTemplate metadata: namespace: openshift-machine-api name: selfnoderemediationtemplate-sample spec: template: spec: remediationStrategy: Automatic 1 1 Specifies the remediation strategy. The default remediation strategy is Automatic . 
To create the SelfNodeRemediationTemplate CR, run the following command: USD oc create -f <snrt-name>.yaml Create or update the MachineHealthCheck CR to point to the SelfNodeRemediationTemplate CR: Define or update the MachineHealthCheck CR: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: machine-health-check namespace: openshift-machine-api spec: selector: matchLabels: 1 machine.openshift.io/cluster-api-machine-role: "worker" machine.openshift.io/cluster-api-machine-type: "worker" unhealthyConditions: - type: "Ready" timeout: "300s" status: "False" - type: "Ready" timeout: "300s" status: "Unknown" maxUnhealthy: "40%" nodeStartupTimeout: "10m" remediationTemplate: 2 kind: SelfNodeRemediationTemplate apiVersion: self-node-remediation.medik8s.io/v1alpha1 name: selfnoderemediationtemplate-sample 1 Selects whether the machine health check is for worker or control-plane nodes. The label can also be user-defined. 2 Specifies the details for the remediation template. To create a MachineHealthCheck CR, run the following command: USD oc create -f <mhc-name>.yaml To update a MachineHealthCheck CR, run the following command: USD oc apply -f <mhc-name>.yaml
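After both resources are created, you can confirm that they exist and that the machine health check references the remediation template. The following is a brief verification sketch, not part of the original procedure; it assumes the resource names from the examples above and that the CRDs expose their kind names to the CLI:
$ oc get machinehealthcheck machine-health-check -n openshift-machine-api -o yaml
$ oc get selfnoderemediationtemplate selfnoderemediationtemplate-sample -n openshift-machine-api
In the MachineHealthCheck output, check that the remediationTemplate section names the SelfNodeRemediationTemplate CR. You can also run oc get machines -n openshift-machine-api -w to watch machines being deleted and replaced when remediation is triggered.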
[ "apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationTemplate metadata: namespace: openshift-machine-api name: selfnoderemediationtemplate-sample spec: template: spec: remediationStrategy: Automatic 1", "oc create -f <snrt-name>.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: machine-health-check namespace: openshift-machine-api spec: selector: matchLabels: 1 machine.openshift.io/cluster-api-machine-role: \"worker\" machine.openshift.io/cluster-api-machine-type: \"worker\" unhealthyConditions: - type: \"Ready\" timeout: \"300s\" status: \"False\" - type: \"Ready\" timeout: \"300s\" status: \"Unknown\" maxUnhealthy: \"40%\" nodeStartupTimeout: \"10m\" remediationTemplate: 2 kind: SelfNodeRemediationTemplate apiVersion: self-node-remediation.medik8s.io/v1alpha1 name: selfnoderemediationtemplate-sample", "oc create -f <mhc-name>.yaml", "oc apply -f <mhc-name>.yaml" ]
https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.4/html/remediation_fencing_and_maintenance/machine-health-checks
Chapter 12. Monitoring application health by using health checks
Chapter 12. Monitoring application health by using health checks In software systems, components can become unhealthy due to transient issues such as temporary connectivity loss, configuration errors, or problems with external dependencies. OpenShift Container Platform applications have a number of options to detect and handle unhealthy containers. 12.1. Understanding health checks A health check periodically performs diagnostics on a running container using any combination of the readiness, liveness, and startup health checks. You can include one or more probes in the specification for the pod that contains the container which you want to perform the health checks. Note If you want to add or edit health checks in an existing pod, you must edit the pod DeploymentConfig object or use the Developer perspective in the web console. You cannot use the CLI to add or edit health checks for an existing pod. Readiness probe A readiness probe determines if a container is ready to accept service requests. If the readiness probe fails for a container, the kubelet removes the pod from the list of available service endpoints. After a failure, the probe continues to examine the pod. If the pod becomes available, the kubelet adds the pod to the list of available service endpoints. Liveness health check A liveness probe determines if a container is still running. If the liveness probe fails due to a condition such as a deadlock, the kubelet kills the container. The pod then responds based on its restart policy. For example, a liveness probe on a pod with a restartPolicy of Always or OnFailure kills and restarts the container. Startup probe A startup probe indicates whether the application within a container is started. All other probes are disabled until the startup succeeds. If the startup probe does not succeed within a specified time period, the kubelet kills the container, and the container is subject to the pod restartPolicy . Some applications can require additional startup time on their first initialization. You can use a startup probe with a liveness or readiness probe to delay that probe long enough to handle lengthy start-up time using the failureThreshold and periodSeconds parameters. For example, you can add a startup probe, with a failureThreshold of 30 failures and a periodSeconds of 10 seconds (30 * 10s = 300s) for a maximum of 5 minutes, to a liveness probe. After the startup probe succeeds the first time, the liveness probe takes over. You can configure liveness, readiness, and startup probes with any of the following types of tests: HTTP GET : When using an HTTP GET test, the test determines the healthiness of the container by using a web hook. The test is successful if the HTTP response code is between 200 and 399 . You can use an HTTP GET test with applications that return HTTP status codes when completely initialized. Container Command: When using a container command test, the probe executes a command inside the container. The probe is successful if the test exits with a 0 status. TCP socket: When using a TCP socket test, the probe attempts to open a socket to the container. The container is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete. You can configure several fields to control the behavior of a probe: initialDelaySeconds : The time, in seconds, after the container starts before the probe can be scheduled. The default is 0. 
periodSeconds : The delay, in seconds, between performing probes. The default is 10 . This value must be greater than timeoutSeconds . timeoutSeconds : The number of seconds of inactivity after which the probe times out and the container is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . successThreshold : The number of times that the probe must report success after a failure to reset the container status to successful. The value must be 1 for a liveness probe. The default is 1 . failureThreshold : The number of times that the probe is allowed to fail. The default is 3. After the specified attempts: for a liveness probe, the container is restarted for a readiness probe, the pod is marked Unready for a startup probe, the container is killed and is subject to the pod's restartPolicy Example probes The following are samples of different probes as they would appear in an object specification. Sample readiness probe with a container command readiness probe in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application ... spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy ... 1 The container name. 2 The container image to deploy. 3 A readiness probe. 4 A container command test. 5 The commands to execute on the container. Sample container command startup probe and liveness probe with container command tests in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application ... spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11 ... 1 The container name. 2 Specify the container image to deploy. 3 A liveness probe. 4 An HTTP GET test. 5 The internet scheme: HTTP or HTTPS . The default value is HTTP . 6 The port on which the container is listening. 7 A startup probe. 8 An HTTP GET test. 9 The port on which the container is listening. 10 The number of times to try the probe after a failure. 11 The number of seconds to perform the probe. Sample liveness probe with a container command test that uses a timeout in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application ... spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8 ... 1 The container name. 2 Specify the container image to deploy. 3 The liveness probe. 4 The type of probe, here a container command probe. 5 The command line to execute inside the container. 6 How often in seconds to perform the probe. 7 The number of consecutive successes needed to show success after a failure. 8 The number of times to try the probe after a failure. Sample readiness probe and liveness probe with a TCP socket test in a deployment kind: Deployment apiVersion: apps/v1 ... spec: ... 
template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 ... 1 The readiness probe. 2 The liveness probe. 12.2. Configuring health checks using the CLI To configure readiness, liveness, and startup probes, add one or more probes to the specification for the pod that contains the container which you want to perform the health checks Note If you want to add or edit health checks in an existing pod, you must edit the pod DeploymentConfig object or use the Developer perspective in the web console. You cannot use the CLI to add or edit health checks for an existing pod. Procedure To add probes for a container: Create a Pod object to add one or more probes: apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19 1 Specify the container name. 2 Specify the container image to deploy. 3 Optional: Create a Liveness probe. 4 Specify a test to perform, here a TCP Socket test. 5 Specify the port on which the container is listening. 6 Specify the time, in seconds, after the container starts before the probe can be scheduled. 7 Specify the number of seconds to perform the probe. The default is 10 . This value must be greater than timeoutSeconds . 8 Specify the number of seconds of inactivity after which the probe is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . 9 Optional: Create a Readiness probe. 10 Specify the type of test to perform, here an HTTP test. 11 Specify a host IP address. When host is not defined, the PodIP is used. 12 Specify HTTP or HTTPS . When scheme is not defined, the HTTP scheme is used. 13 Specify the port on which the container is listening. 14 Optional: Create a Startup probe. 15 Specify the type of test to perform, here an Container Execution probe. 16 Specify the commands to execute on the container. 17 Specify the number of times to try the probe after a failure. 18 Specify the number of seconds to perform the probe. The default is 10 . This value must be greater than timeoutSeconds . 19 Specify the number of seconds of inactivity after which the probe is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . Note If the initialDelaySeconds value is lower than the periodSeconds value, the first Readiness probe occurs at some point between the two periods due to an issue with timers. The timeoutSeconds value must be lower than the periodSeconds value. 
Create the Pod object: USD oc create -f <file-name>.yaml Verify the state of the health check pod: USD oc describe pod health-check Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image "registry.k8s.io/liveness" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image "registry.k8s.io/liveness" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container The following is the output of a failed probe that restarted a container: Sample Liveness check output with unhealthy container USD oc describe pod pod1 Example output .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "registry.k8s.io/liveness" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "registry.k8s.io/liveness" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image "registry.k8s.io/liveness" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "registry.k8s.io/liveness" in 244.116568ms 12.3. Monitoring application health using the Developer perspective You can use the Developer perspective to add three types of health probes to your container to ensure that your application is healthy: Use the Readiness probe to check if the container is ready to handle requests. Use the Liveness probe to check if the container is running. Use the Startup probe to check if the application within the container has started. You can add health checks either while creating and deploying an application, or after you have deployed an application. 12.4. Adding health checks using the Developer perspective You can use the Topology view to add health checks to your deployed application. Prerequisites: You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. Procedure In the Topology view, click on the application node to see the side panel. If the container does not have health checks added to ensure the smooth running of your application, a Health Checks notification is displayed with a link to add health checks. In the displayed notification, click the Add Health Checks link. Alternatively, you can also click the Actions drop-down list and select Add Health Checks . 
Note that if the container already has health checks, you will see the Edit Health Checks option instead of the add option. In the Add Health Checks form, if you have deployed multiple containers, use the Container drop-down list to ensure that the appropriate container is selected. Click the required health probe links to add them to the container. Default data for the health checks is prepopulated. You can add the probes with the default data or further customize the values and then add them. For example, to add a Readiness probe that checks if your container is ready to handle requests: Click Add Readiness Probe , to see a form containing the parameters for the probe. Click the Type drop-down list to select the request type you want to add. For example, in this case, select Container Command to select the command that will be executed inside the container. In the Command field, add an argument cat , similarly, you can add multiple arguments for the check, for example, add another argument /tmp/healthy . Retain or modify the default values for the other parameters as required. Note The Timeout value must be lower than the Period value. The Timeout default value is 1 . The Period default value is 10 . Click the check mark at the bottom of the form. The Readiness Probe Added message is displayed. Click Add to add the health check. You are redirected to the Topology view and the container is restarted. In the side panel, verify that the probes have been added by clicking on the deployed pod under the Pods section. In the Pod Details page, click the listed container in the Containers section. In the Container Details page, verify that the Readiness probe - Exec Command cat /tmp/healthy has been added to the container. 12.5. Editing health checks using the Developer perspective You can use the Topology view to edit health checks added to your application, modify them, or add more health checks. Prerequisites: You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. You have added health checks to your application. Procedure In the Topology view, right-click your application and select Edit Health Checks . Alternatively, in the side panel, click the Actions drop-down list and select Edit Health Checks . In the Edit Health Checks page: To remove a previously added health probe, click the minus sign adjoining it. To edit the parameters of an existing probe: Click the Edit Probe link to a previously added probe to see the parameters for the probe. Modify the parameters as required, and click the check mark to save your changes. To add a new health probe, in addition to existing health checks, click the add probe links. For example, to add a Liveness probe that checks if your container is running: Click Add Liveness Probe , to see a form containing the parameters for the probe. Edit the probe parameters as required. Note The Timeout value must be lower than the Period value. The Timeout default value is 1 . The Period default value is 10 . Click the check mark at the bottom of the form. The Liveness Probe Added message is displayed. Click Save to save your modifications and add the additional probes to your container. You are redirected to the Topology view. In the side panel, verify that the probes have been added by clicking on the deployed pod under the Pods section. In the Pod Details page, click the listed container in the Containers section. 
In the Container Details page, verify that the Liveness probe - HTTP Get 10.129.4.65:8080/ has been added to the container, in addition to the earlier existing probes. 12.6. Monitoring health check failures using the Developer perspective In case an application health check fails, you can use the Topology view to monitor these health check violations. Prerequisites: You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. You have added health checks to your application. Procedure In the Topology view, click on the application node to see the side panel. Click the Observe tab to see the health check failures in the Events (Warning) section. Click the down arrow adjoining Events (Warning) to see the details of the health check failure. Additional resources For details on switching to the Developer perspective in the web console, see About the Developer perspective . For details on adding health checks while creating and deploying an application, see Advanced Options in the Creating applications using the Developer perspective section.
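Probe failures also surface as warning events and container restarts, so they can be watched from the command line as well as from the Developer perspective. The following is a small sketch rather than part of this chapter; the pod name my-application comes from the earlier examples, and the Unhealthy reason matches the warning events shown in the sample output above:
$ oc get events --field-selector reason=Unhealthy --sort-by=.lastTimestamp
$ oc get pod my-application -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
$ oc describe pod my-application | grep -i -A 2 'liveness\|readiness\|startup'
A climbing restart count together with repeated Unhealthy events usually points at a liveness probe that is failing or timing out.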
[ "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8", "kind: Deployment apiVersion: apps/v1 spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19", "oc create -f <file-name>.yaml", "oc describe pod health-check", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"registry.k8s.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"registry.k8s.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container", "oc describe pod pod1", ". 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"registry.k8s.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 244.116568ms" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/building_applications/application-health
Appendix A. Component Versions
Appendix A. Component Versions This appendix is a list of components and their versions in the Red Hat Enterprise Linux 6.4 release. Table A.1. Component Versions
Component Version
Kernel 2.6.32-358
QLogic qla2xxx driver 8.04.00.08.06.4-k
QLogic ql2xxx firmware ql23xx-firmware-3.03.27-3.1 ql2100-firmware-1.19.38-3.1 ql2200-firmware-2.02.08-3.1 ql2400-firmware-5.08.00-1 ql2500-firmware-5.08.00-1
Emulex lpfc driver 8.3.5.86.1p
iSCSI initiator utils iscsi-initiator-utils-6.2.0.873-2
DM-Multipath device-mapper-multipath-0.4.9-64
LVM lvm2-2.02.98-9
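To compare a running system against this table, query the installed versions directly. This is a quick sketch rather than part of the appendix; the package and kernel module names are taken from the table above:
$ uname -r
$ rpm -q device-mapper-multipath lvm2 iscsi-initiator-utils
$ modinfo -F version qla2xxx lpfc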
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_release_notes/component_versions
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 10.0-4 Fri Jul 14 2017 Steven Levine Update to version for 6.9 GA publication. Revision 10.0-2 Wed Mar 8 2017 Steven Levine Version for 6.9 GA publication. Revision 10.0-1 Fri Dec 16 2016 Steven Levine Version for 6.9 Beta publication. Revision 9.0-13 Tue Nov 8 2016 Steven Levine Small update for 6.8. Revision 9.0-12 Wed Apr 27 2016 Steven Levine Preparing document for 6.8 GA publication. Revision 9.0-10 Wed Mar 9 2016 Steven Levine Initial revision for Red Hat Enterprise Linux 6.8 Beta release Revision 8.0-5 Wed Jul 22 2015 Steven Levine Republish for Red Hat Enterprise Linux 6.7 Revision 8.0-4 Wed Jul 8 2015 Steven Levine Initial revision for Red Hat Enterprise Linux 6.7 Revision 8.0-3 Thu Apr 23 2015 Steven Levine Republish for Red Hat Enterprise Linux 6.7 Beta release Revision 7.0-4 Thu Aug 7 2014 Steven Levine Initial revision for Red Hat Enterprise Linux 6.6 Revision 7.0-3 Thu Aug 7 2014 Steven Levine Initial revision for Red Hat Enterprise Linux 6.6 Beta release Revision 6.0-6 Wed Nov 13 2013 Steven Levine Initial revision for Red Hat Enterprise Linux 6.5 Revision 6.0-5 Fri Sep 27 2013 Steven Levine Initial revision for Red Hat Enterprise Linux 6.5 Beta release Revision 5.0-9 Mon Feb 18 2013 Steven Levine Initial revision for Red Hat Enterprise Linux 6.4 Revision 5.0-7 Mon Nov 26 2012 Steven Levine Initial revision for Red Hat Enterprise Linux 6.4 Beta release Revision 4.0-3 Fri Jun 15 2012 Steven Levine Initial revision for Red Hat Enterprise Linux 6.3 Revision 3.0-3 Thu Dec 1 2011 Steven Levine Initial revision for Red Hat Enterprise Linux 6.2 Revision 3.0-1 Mon Sep 19 2011 Steven Levine Initial revision for Red Hat Enterprise Linux 6.2 Beta release Revision 2.0-1 Thu May 19 2011 Steven Levine Initial revision for Red Hat Enterprise Linux 6.1 Revision 1.0-1 Wed Nov 10 2010 Steven Levine Initial revision for the Red Hat Enterprise Linux 6 release
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/appe-publican-revision_history
Part II. Managing Confined Services
Part II. Managing Confined Services This part of the book focuses more on practical tasks and provides information on how to set up and configure various services. For each service, the most common types and Booleans are listed along with their specifications. Also included are real-world examples of configuring those services and demonstrations of how SELinux complements their operation. When SELinux is in enforcing mode, the default policy used in Red Hat Enterprise Linux is the targeted policy. Processes that are targeted run in a confined domain, and processes that are not targeted run in an unconfined domain. See Chapter 3, Targeted Policy for more information about the targeted policy and about confined and unconfined processes.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/part_ii-managing_confined_services
Chapter 15. Managing system clocks to satisfy application needs
Chapter 15. Managing system clocks to satisfy application needs Multiprocessor systems such as NUMA or SMP have multiple instances of hardware clocks. During boot time the kernel discovers the available clock sources and selects one to use. To improve performance, you can change the clock source used to meet the minimum requirements of a real-time system. 15.1. Hardware clocks Multiple instances of clock sources found in multiprocessor systems, such as non-uniform memory access (NUMA) and Symmetric multiprocessing (SMP), interact among themselves and the way they react to system events, such as CPU frequency scaling or entering energy economy modes, determine whether they are suitable clock sources for the real-time kernel. The preferred clock source is the Time Stamp Counter (TSC). If the TSC is not available, the High Precision Event Timer (HPET) is the second best option. However, not all systems have HPET clocks, and some HPET clocks can be unreliable. In the absence of TSC and HPET, other options include the ACPI Power Management Timer (ACPI_PM), the Programmable Interval Timer (PIT), and the Real Time Clock (RTC). The last two options are either costly to read or have a low resolution (time granularity), therefore they are sub-optimal for use with the real-time kernel. 15.2. Viewing the available clock sources in your system The list of available clock sources in your system is in the /sys/devices/system/clocksource/clocksource0/available_clocksource file. Procedure Display the available_clocksource file. In this example, the available clock sources in the system are TSC, HPET, and ACPI_PM. 15.3. Viewing the clock source currently in use The currently used clock source in your system is stored in the /sys/devices/system/clocksource/clocksource0/current_clocksource file. Procedure Display the current_clocksource file. In this example, the current clock source in the system is TSC. 15.4. Temporarily changing the clock source to use Sometimes the best-performing clock for a system's main application is not used due to known problems on the clock. After ruling out all problematic clocks, the system can be left with a hardware clock that is unable to satisfy the minimum requirements of a real-time system. Requirements for crucial applications vary on each system. Therefore, the best clock for each application, and consequently each system, also varies. Some applications depend on clock resolution, and a clock that delivers reliable nanoseconds readings can be more suitable. Applications that read the clock too often can benefit from a clock with a smaller reading cost (the time between a read request and the result). In these cases it is possible to override the clock selected by the kernel, provided that you understand the side effects of the override and can create an environment which will not trigger the known shortcomings of the given hardware clock. Important The kernel automatically selects the best available clock source. Overriding the selected clock source is not recommended unless the implications are well understood. Prerequisites You have root permissions on the system. Procedure View the available clock sources. As an example, consider the available clock sources in the system are TSC, HPET, and ACPI_PM. Write the name of the clock source you want to use to the /sys/devices/system/clocksource/clocksource0/current_clocksource file. Note The changes apply to the clock source currently in use. When the system reboots, the default clock is used. 
To make the change persistent, see Making persistent kernel tuning parameter changes . Verification Display the current_clocksource file to ensure that the current clock source is the specified clock source. The example uses HPET as the current clock source in the system. 15.5. Comparing the cost of reading hardware clock sources You can compare the speed of the clocks in your system. Reading from the TSC involves reading a register from the processor. Reading from the HPET clock involves reading a memory area. Reading from the TSC is faster, which provides a significant performance advantage when timestamping hundreds of thousands of messages per second. Prerequisites You have root permissions on the system. The clock_timing program must be on the system. For more information, see the clock_timing program . Procedure Change to the directory in which the clock_timing program is saved. View the available clock sources in your system. In this example, the available clock sources in the system are TSC , HPET , and ACPI_PM . View the currently used clock source. In this example, the current clock source in the system is TSC . Run the time utility in conjunction with the ./ clock_timing program. The output displays the duration required to read the clock source 10 million times. The example shows the following parameters: real - The total time spent beginning from program invocation until the process ends. real includes user and kernel times, and will usually be larger than the sum of the latter two. If this process is interrupted by an application with higher priority, or by a system event such as a hardware interrupt (IRQ), this time spent waiting is also computed under real . user - The time the process spent in user space performing tasks that did not require kernel intervention. sys - The time spent by the kernel while performing tasks required by the user process. These tasks include opening files, reading and writing to files or I/O ports, memory allocation, thread creation, and network related activities. Write the name of the clock source you want to test to the /sys/devices/system/clocksource/clocksource0/current_clocksource file. In this example, the current clock source is changed to HPET . Repeat steps 4 and 5 for all of the available clock sources. Compare the results of step 4 for all of the available clock sources. Additional resources time(1) man page on your system 15.6. Synchronizing the TSC timer on Opteron CPUs The current generation of AMD64 Opteron processors can be susceptible to a large gettimeofday skew. This skew occurs when both cpufreq and the Time Stamp Counter (TSC) are in use. RHEL for Real Time provides a method to prevent this skew by forcing all processors to simultaneously change to the same frequency. As a result, the TSC on a single processor never increments at a different rate than the TSC on another processor. Prerequisites You have root permissions on the system. Procedure Enable the clocksource=tsc and powernow-k8.tscsync=1 kernel options: This forces the use of TSC and enables simultaneous core processor frequency transitions. Restart the machine. Additional resources gettimeofday(2) man page on your system 15.7. The clock_timing program The clock_timing program reads the current clock source 10 million times. In conjunction with the time utility it measures the amount of time needed to do this. Procedure To create the clock_timing program: Create a directory for the program files. Change to the created directory. 
Create a source file and open it in a text editor. Enter the following into the file: Save the file and exit the editor. Compile the file. The clock_timing program is ready and can be run from the directory in which it is saved.
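The comparison procedure above can be scripted so that every available clock source is timed in a single pass. The following is a small sketch built only from the sysfs paths and commands already shown in this chapter; run it as root from the directory that contains the clock_timing binary:
for clk in $(cat /sys/devices/system/clocksource/clocksource0/available_clocksource); do
    echo "$clk" > /sys/devices/system/clocksource/clocksource0/current_clocksource
    echo "== $(cat /sys/devices/system/clocksource/clocksource0/current_clocksource) =="
    time ./clock_timing    # compare the real, user, and sys figures between sources
done
Remember that the clock source change is temporary; the kernel's own selection returns after a reboot unless the change is made persistent as described earlier.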
[ "cat /sys/devices/system/clocksource/clocksource0/available_clocksource tsc hpet acpi_pm", "cat /sys/devices/system/clocksource/clocksource0/current_clocksource tsc", "cat /sys/devices/system/clocksource/clocksource0/available_clocksource tsc hpet acpi_pm", "echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource", "cat /sys/devices/system/clocksource/clocksource0/current_clocksource hpet", "cd clock_test", "cat /sys/devices/system/clocksource/clocksource0/available_clocksource tsc hpet acpi_pm", "cat /sys/devices/system/clocksource/clocksource0/current_clocksource tsc", "time ./clock_timing real 0m0.601s user 0m0.592s sys 0m0.002s", "echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource", "grubby --update-kernel=ALL --args=\"clocksource=tsc powernow-k8.tscsync=1\"", "mkdir clock_test", "cd clock_test", "{EDITOR} clock_timing.c", "#include <time.h> void main() { int rc; long i; struct timespec ts; for(i=0; i<10000000; i++) { rc = clock_gettime(CLOCK_MONOTONIC, &ts); } }", "gcc clock_timing.c -o clock_timing -lrt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/managing-system-clocks-to-satisfy-application-needs_optimizing-RHEL9-for-real-time-for-low-latency-operation
20.5. Language-Specific Changes
20.5. Language-Specific Changes Arabic New Arabic fonts from Paktype are available in Red Hat Enterprise Linux 7: paktype-ajrak, paktype-basic-naskh-farsi, paktype-basic-naskh-sindhi, paktype-basic-naskh-urdu, and paktype-basic-naskh-sa. Chinese The WQY Zenhei font is now the default font for Simplified Chinese. The default input method engine for Simplified Chinese has been changed to ibus-libpinyin from ibus-pinyin, which Red Hat Enterprise Linux 6 uses. Indic The new Lohit Devanagari font replaces the separate Lohit fonts for Hindi, Kashmiri, Konkani, Maithili, Marathi, and Nepali. Any distinct glyphs for these languages needed in the future can be handled in Lohit Devanagari with the Open Type Font locl tags. New font packages gubbi-fonts and navilu-fonts have been added for the Kannada language. Japanese IPA fonts are no longer installed by default. ibus-kkc, the Kana Kanji Conversion, is the new default Japanese input method engine, using the new libkkc back end. It replaces ibus-anthy, anthy, and kasumi. Korean The Nanum font is now used by default. New Locales Red Hat Enterprise Linux 7 supports new locales: Konkani (kok_IN) and Pushto (ps_AF).
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-internationalization-language_specific_changes
20.13. Using Pass-Through Authentication
20.13. Using Pass-Through Authentication Pass-through authentication (PTA) is a mechanism which allows one Red Hat Directory Server instance to consult another to authenticate bind requests. Pass-through authentication is implement through the PTA Plug-in; when enabled, the plug-in lets a Directory Server instance accept simple bind operations (password-based) for entries not stored in its local database. Directory Server uses PTA to administer the user and configuration directories on separate instances of Directory Server. The first instance acts as the PTA Directory Server which is the server that passes through bind requests to another Directory Server. The second instance acts as the authenticating directory, which is the server that contains the entry and verifies the bind credentials of the requesting client. The pass-through subtree is the subtree not present on the PTA directory. When a user's bind DN contains this subtree, the user's credentials are passed on to the authenticating directory. Figure 20.2. Simple Pass-Through Authentication Process Here's how pass-through authentication works: The configuration Directory Server (authenticating directory) is installed on machine A. The configuration directory always contains the suffix with the authenticating user entry, for example, o=RedHat . In this example, the server name is authdir.example.com . The user Directory Server (PTA directory) is then installed on machine B. The user directory stores the root suffix, such as dc=example,dc=com . In this example, the server name is userdir.example.com . Set up the plug-in on userdir.example.com by using the following commands: Restart Directory Server on userdir.example.com . The user directory is now configured to send all bind requests for entries with a DN containing o=RedHat to the configuration directory authdir.example.com . The user directory allows any user from o=RedHat to bind. 20.13.1. PTA Plug-in Syntax PTA Plug-in configuration information is specified in the cn=Pass Through Authentication , cn=plugins,cn=config entry on the PTA directory (the user directory configured to pass through bind requests to the authenticating directory) using the required PTA syntax. Use the following commands to manage pass-through authentication URLs: To add a pass-through authentication URL: To modify a pass-through authentication URL: To remove pass-through authentication URL: The variable components of the PTA plug-in syntax are described in Table 20.3, "PTA Plug-in Parameters" . Note The LDAP URL ( ldap|ldaps:// authDS/subtree ) must be separated from the optional parameters ( maxconns, maxops, timeout, ldver, connlifetime, startTLS ) by a single space. If any of the optional parameters are defined, all of them must be defined, even if only the default values are used. Several authenticating directories or subtrees can be specified by incrementing the nsslapd-pluginarg attribute suffix by one each time, as in Section 20.13.3.2, "Specifying Multiple Authenticating Directory Servers" . For example: The optional parameters are described in the following table in the order in which they appear in the syntax. Table 20.3. PTA Plug-in Parameters Variable Definition state Defines whether the plug-in is enabled or disabled. Acceptable values are on or off . ldap|ldaps Defines whether TLS is used for communication between the two Directory Servers. See Section 20.13.2.1, "Configuring the Servers to Use a Secure Connection" for more information. authDS The authenticating directory host name. 
The port number of the Directory Server can be given by adding a colon and then the port number. For example, ldap://dirserver.example.com:389/ . If the port number is not specified, the PTA server attempts to connect using either of the standard ports: Port 389 if ldap:// is specified in the URL. Port 636 if ldaps:// is specified in the URL. See Section 20.13.2.2, "Specifying the Authenticating Directory Server" for more information. subtree The pass-through subtree . The PTA Directory Server passes through bind requests to the authenticating Directory Server from all clients whose DN is in this subtree. See Section 20.13.2.3, "Specifying the Pass-Through Subtree" for more information. This subtree must not exist on this server. maxconns Optional . The maximum number of connections the PTA directory can simultaneously open to the authenticating directory. The default is 3 . See Section 20.13.2.4, "Configuring the Optional Parameters" for more information. maxops Optional . The maximum number of simultaneous operations (usually bind requests) the PTA directory can send to the authenticating directory within a single connection. The default is 5 . See Section 20.13.2.4, "Configuring the Optional Parameters" for more information. timeout Optional . The time limit, in seconds, that the PTA directory waits for a response from the authenticating Directory Server. If this timeout is exceeded, the server returns an error to the client. The default is 300 seconds (five minutes). Specify zero ( 0 ) to indicate no time limit should be enforced. See Section 20.13.2.4, "Configuring the Optional Parameters" for more information. ldver Optional . The version of the LDAP protocol used to connect to the authenticating directory. Directory Server supports LDAP version 2 and 3. The default is version 3, and Red Hat strongly recommends against using LDAPv2, which is old and will be deprecated. See Section 20.13.2.4, "Configuring the Optional Parameters" for more information. connlifetime Optional . The time limit, in seconds, within which a connection may be used. If a bind request is initiated by a client after this time has expired, the server closes the connection and opens a new connection to the authenticating directory. The server will not close the connection unless a bind request is initiated and the directory determines the connection lifetime has been exceeded. If this option is not specified, or if only one host is listed, no connection lifetime will be enforced. If two or more hosts are listed, the default is 300 seconds (five minutes). See Section 20.13.2.4, "Configuring the Optional Parameters" for more information. startTLS Optional . A flag of whether to use STARTTLS for the connection to the authenticating directory. STARTTLS establishes a secure connection over the standard port, so it is useful for connecting using LDAP instead of LDAPS. The TLS server and CA certificates need to be available on both of the servers. The default is 0 , which is off. To enable STARTTLS, set it to 1 . To use STARTTLS, the LDAP URL must use ldap: , not ldaps: . See Section 20.13.2.4, "Configuring the Optional Parameters" for more information. 20.13.2. Configuring the PTA Plug-in To modify the PTA configuration: Use the dsconf plugin pass-through-auth command to modify the cn=Pass Through Authentication,cn=plugins,cn=config entry. Restart Directory Server. Before configuring any of the PTA Plug-in parameters, the PTA Plug-in entry must be present in the Directory Server. 
If this entry does not exist, create it with the appropriate syntax, as described in Section 20.13.1, "PTA Plug-in Syntax" . Note If the user and configuration directories are installed on different instances of the directory, the PTA Plug-in entry is automatically added to the user directory's configuration and enabled. This section provides information about configuring the plug-in in the following sections: Section 20.13.2.1, "Configuring the Servers to Use a Secure Connection" Section 20.13.2.2, "Specifying the Authenticating Directory Server" Section 20.13.2.3, "Specifying the Pass-Through Subtree" Section 20.13.2.4, "Configuring the Optional Parameters" 20.13.2.1. Configuring the Servers to Use a Secure Connection The PTA directory can be configured to communicate with the authenticating directory over TLS by specifying LDAPS in the LDAP URL of the PTA directory. For example: 20.13.2.2. Specifying the Authenticating Directory Server The authenticating directory contains the bind credentials for the entry with which the client is attempting to bind. The PTA directory passes the bind request to the host defines as the authenticating directory. To specify the authenticating Directory Server, replace authDS in the LDAP URL of the PTA directory with the authenticating directory's host name, as described in Table 20.3, "PTA Plug-in Parameters" . Use the dsconf plugin pass-through-auth command to edit the PTA Plug-in entry: Optionally, include the port number. If the port number is not given, the PTA Directory Server attempts to connect using either the standard port (389) for ldap:// or the secure port (636) for ldaps:// . If the connection between the PTA Directory Server and the authenticating Directory Server is broken or the connection cannot be opened, the PTA Directory Server sends the request to the server specified, if any. There can be multiple authenticating Directory Servers specified, as required, to provide failover if the first Directory Server is unavailable. All of the authentication Directory Server are set in the nsslapd-pluginarg0 attribute. Multiple authenticating Directory Servers are listed in a space-separate list of host:port pairs, with this format: Restart the server. 20.13.2.3. Specifying the Pass-Through Subtree The PTA directory passes through bind requests to the authenticating directory from all clients with a DN defined in the pass-through subtree. The subtree is specified by replacing the subtree parameter in the LDAP URL of the PTA directory. The pass-through subtree must not exist in the PTA directory. If it does, the PTA directory attempts to resolve bind requests using its own directory contents and the binds fail. Use the dsconf plugin pass-through-auth command to import the LDIF file into the directory: For information on the variable components in this syntax, see Table 20.3, "PTA Plug-in Parameters" . Restart the server: 20.13.2.4. Configuring the Optional Parameters Additional parameters the control the PTA connection can be set with the LDAP URL. The maximum number of connections the PTA Directory Server can open simultaneously to the authenticating directory, represented by maxconns in the PTA syntax. The default value is 3 . The maximum number of bind requests the PTA Directory Server can send simultaneously to the authenticating Directory Server within a single connection. In the PTA syntax, this parameter is maxops . The default is value is 5 . 
The time limit for the PTA Directory Server to wait for a response from the authenticating Directory Server. In the PTA syntax, this parameter is timeout . The default value is 300 seconds (five minutes). The version of the LDAP protocol for the PTA Directory Server to use to connect to the authenticating Directory Server. In the PTA syntax, this parameter is ldver . The default is LDAPv3 . The time limit in seconds within which a connection may be used. If a bind request is initiated by a client after this time has expired, the server closes the connection and opens a new connection to the authenticating Directory Server. The server will not close the connection unless a bind request is initiated and the server determines the timeout has been exceeded. If this option is not specified or if only one authenticating Directory Server is listed in the authDS parameter, no time limit will be enforced. If two or more hosts are listed, the default is 300 seconds (five minutes). In the PTA syntax, this parameter is connlifetime . Whether to use STARTTLS for the connection. STARTTLS creates a secure connection over a standard LDAP port. For STARTTLS, the servers must have their server and CA certificates installed, but they do not need to be running in TLS. The default is 0 , which means STARTTLS is off. To enable STARTTLS, set it to 1 . To use STARTTLS, the LDAP URL must use ldap: , not ldaps: . Use the dsconf plugin pass-through-auth command to edit the plug-in entry: (In this example, each of the optional parameters is set to its default value.) Make sure there is a space between the subtree parameter, and the optional parameters. Note Although these parameters are optional, if any one of them is defined, they all must be defined, even if they use the default values. Restart the server: 20.13.3. PTA Plug-in Syntax Examples This section contains the following examples of PTA Plug-in syntax in the dse.ldif file: Section 20.13.3.1, "Specifying One Authenticating Directory Server and One Subtree" Section 20.13.3.2, "Specifying Multiple Authenticating Directory Servers" Section 20.13.3.3, "Specifying One Authenticating Directory Server and Multiple Subtrees" Section 20.13.3.4, "Using Non-Default Parameter Values" Section 20.13.3.5, "Specifying Different Optional Parameters and Subtrees for Different Authenticating Directory Servers" 20.13.3.1. Specifying One Authenticating Directory Server and One Subtree This example configures the PTA Plug-in to accept all defaults for the optional variables. This configuration causes the PTA Directory Server to connect to the authenticating Directory Server for all bind requests to the o=example subtree. The host name of the authenticating Directory Server is configdir.example.com . 20.13.3.2. Specifying Multiple Authenticating Directory Servers If the connection between the PTA Directory Server and the authenticating Directory Server is broken or the connection cannot be opened, the PTA Directory Server sends the request to the server specified, if any. There can be multiple authenticating Directory Servers specified, as required, to provide failover if the first Directory Server is unavailable. All of the authentication Directory Server are set in the nsslapd-pluginarg0 attribute. Multiple authenticating Directory Servers are listed in a space-separate list of host:port pairs. 
For example: Note The nsslapd-pluginarg0 attribute sets the authentication Directory Server; additional nsslapd-pluginargN attributes can set additional suffixes for the PTA Plug-in to use, but not additional hosts . 20.13.3.3. Specifying One Authenticating Directory Server and Multiple Subtrees The following example configures the PTA Directory Server to pass through bind requests for more than one subtree (using parameter defaults): 20.13.3.4. Using Non-Default Parameter Values This example uses a non-default value ( 10 ) only for the maximum number of connections parameter maxconns . Each of the other parameters is set to its default value. However, because one parameter is specified, all parameters must be defined explicitly in the syntax. 20.13.3.5. Specifying Different Optional Parameters and Subtrees for Different Authenticating Directory Servers To specify a different pass-through subtree and optional parameter values for each authenticating Directory Server, set more than one LDAP URL/optional parameters pair. Separate the LDAP URL/optional parameter pairs with a single space as follows.
[ "dsconf -D \"cn=Directory Manager\" ldap://userdir.example.com plugin pass-through-auth enable dsconf -D \"cn=Directory Manager\" ldap://userdir.example.com plugin pass-through-auth url add \" ldap://authdir.example.com/o=RedHat \"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin pass-through-auth url add URL", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin pass-through-auth url modify old_URL new_URL", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin pass-through-auth url delete URL", "nsslapd-pluginarg0: LDAP URL for the first server nsslapd-pluginarg1: LDAP URL for the second server nsslapd-pluginarg2: LDAP URL for the third server", "nsslapd-pluginarg0: ldaps://ldap.example.com:636/o=example", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin pass-through-auth add ldap://server.example.com/o=example", "ldap|ldaps://host1:port1 host2:port2/ subtree", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin pass-through-auth add ldap://server.example.com/o=example", "dsctl instance_name restart", "ldap|ldaps:// authDS/subtree maxconns, maxops, timeout, ldver, connlifetime, startTLS", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin pass-through-auth add ldap://server.example.com/o=example 3,5,300,3,300,0", "dsctl instance_name restart", "dn: cn=Pass Through Authentication,cn=plugins,cn=config nsslapd-pluginEnabled: on nsslapd-pluginarg0: ldap://configdir.example.com/o=example", "dn: cn=Pass Through Authentication,cn=plugins,cn=config nsslapd-pluginEnabled: on nsslapd-pluginarg0: ldap://configdir.example.com:389 config2dir.example.com:1389/o=example", "dn: cn=Pass Through Authentication,cn=plugins,cn=config nsslapd-pluginEnabled: on nsslapd-pluginarg0: ldap://configdir.example.com/o=example nsslapd-pluginarg1: ldap://configdir.example.com/dc=example,dc=com", "dn: cn=Pass Through Authentication,cn=plugins,cn=config nsslapd-pluginEnabled: on nsslapd-pluginarg0: ldap://configdir.example.com/o=example 10,5,300,3,300,1", "dn: cn=Pass Through Authentication,cn=plugins,cn=config nsslapd-pluginEnabled: on nsslapd-pluginarg0:ldap://configdir.example.com/o=example 10,15,30,3,600,0 nsslapd-pluginarg1:ldap://config2dir.example.com/dc=example,dc=com 7,7,300,3,300,1" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Using_the_Pass_through_Authentication_Plug_in
Chapter 4. Enabling Windows container workloads
Chapter 4. Enabling Windows container workloads Before adding Windows workloads to your cluster, you must install the Windows Machine Config Operator (WMCO), which is available in the OpenShift Container Platform OperatorHub. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. Note Dual NIC is not supported on WMCO-managed Windows instances. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed your cluster using installer-provisioned infrastructure, or using user-provisioned infrastructure with the platform: none field set in your install-config.yaml file. You have configured hybrid networking with OVN-Kubernetes for your cluster. This must be completed during the installation of your cluster. For more information, see Configuring hybrid networking . You are running an OpenShift Container Platform cluster version 4.6.8 or later. Note The WMCO is not supported in clusters that use a cluster-wide proxy because the WMCO is not able to route traffic through the proxy connection for the workloads. Additional resources For the comprehensive prerequisites for the Windows Machine Config Operator, see Understanding Windows container workloads . 4.1. Installing the Windows Machine Config Operator You can install the Windows Machine Config Operator using either the web console or OpenShift CLI ( oc ). 4.1.1. Installing the Windows Machine Config Operator using the web console You can use the OpenShift Container Platform web console to install the Windows Machine Config Operator (WMCO). Note Dual NIC is not supported on WMCO-managed Windows instances. Procedure From the Administrator perspective in the OpenShift Container Platform web console, navigate to the Operators OperatorHub page. Use the Filter by keyword box to search for Windows Machine Config Operator in the catalog. Click the Windows Machine Config Operator tile. Review the information about the Operator and click Install . On the Install Operator page: Select the stable channel as the Update Channel . The stable channel enables the latest stable release of the WMCO to be installed. The Installation Mode is preconfigured because the WMCO must be available in a single namespace only. Choose the Installed Namespace for the WMCO. The default Operator recommended namespace is openshift-windows-machine-config-operator . Click the Enable Operator recommended cluster monitoring on the Namespace checkbox to enable cluster monitoring for the WMCO. Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . The WMCO is now listed on the Installed Operators page. Note The WMCO is installed automatically into the namespace you defined, like openshift-windows-machine-config-operator . Verify that the Status shows Succeeded to confirm successful installation of the WMCO. 4.1.2. Installing the Windows Machine Config Operator using the CLI You can use the OpenShift CLI ( oc ) to install the Windows Machine Config Operator (WMCO). Note Dual NIC is not supported on WMCO-managed Windows instances. Procedure Create a namespace for the WMCO. Create a Namespace object YAML file for the WMCO. 
For example, wmco-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: "true" 2 1 It is recommended to deploy the WMCO in the openshift-windows-machine-config-operator namespace. 2 This label is required for enabling cluster monitoring for the WMCO. Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-namespace.yaml Create the Operator group for the WMCO. Create an OperatorGroup object YAML file. For example, wmco-og.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator Create the Operator group: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-og.yaml Subscribe the namespace to the WMCO. Create a Subscription object YAML file. For example, wmco-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: "stable" 1 installPlanApproval: "Automatic" 2 name: "windows-machine-config-operator" source: "redhat-operators" 3 sourceNamespace: "openshift-marketplace" 4 1 Specify stable as the channel. 2 Set an approval strategy. You can set Automatic or Manual . 3 Specify the redhat-operators catalog source, which contains the windows-machine-config-operator package manifests. If your OpenShift Container Platform is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator LifeCycle Manager (OLM). 4 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-sub.yaml The WMCO is now installed to the openshift-windows-machine-config-operator . Verify the WMCO installation: USD oc get csv -n openshift-windows-machine-config-operator Example output NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded 4.2. Configuring a secret for the Windows Machine Config Operator To run the Windows Machine Config Operator (WMCO), you must create a secret in the WMCO namespace containing a private key. This is required to allow the WMCO to communicate with the Windows virtual machine (VM). Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You created a PEM-encoded file containing an RSA key. Procedure Define the secret required to access the Windows VMs: USD oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> \ -n openshift-windows-machine-config-operator 1 1 You must create the private key in the WMCO namespace, like openshift-windows-machine-config-operator . It is recommended to use a different private key than the one used when installing the cluster. 4.3. Additional resources Generating a key pair for cluster node SSH access Adding Operators to a cluster .
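As a hedged sketch of the secret configuration described above, the following commands generate a PEM-encoded RSA key pair, create the cloud-private-key secret from it, and verify that the Operator deployed successfully. The key file name and path are illustrative; any PEM-encoded RSA private key other than the cluster installation key can be used.
# Generate a PEM-encoded RSA key pair without a passphrase (illustrative path)
ssh-keygen -t rsa -b 4096 -m PEM -N '' -f "${HOME}/.ssh/wmco-key"
# Create the secret that the WMCO uses to reach Windows VMs
oc create secret generic cloud-private-key \
    --from-file=private-key.pem="${HOME}/.ssh/wmco-key" \
    -n openshift-windows-machine-config-operator
# Confirm the Operator installed and its pod is running
oc get csv -n openshift-windows-machine-config-operator
oc get pods -n openshift-windows-machine-config-operator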
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: \"true\" 2", "oc create -f <file-name>.yaml", "oc create -f wmco-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator", "oc create -f <file-name>.yaml", "oc create -f wmco-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: \"stable\" 1 installPlanApproval: \"Automatic\" 2 name: \"windows-machine-config-operator\" source: \"redhat-operators\" 3 sourceNamespace: \"openshift-marketplace\" 4", "oc create -f <file-name>.yaml", "oc create -f wmco-sub.yaml", "oc get csv -n openshift-windows-machine-config-operator", "NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded", "oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> -n openshift-windows-machine-config-operator 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/windows_container_support_for_openshift/enabling-windows-container-workloads
Chapter 3. Enabling automated deployments of JBoss Web Server
Chapter 3. Enabling automated deployments of JBoss Web Server The JBoss Web Server collection provides a comprehensive set of variables and default values that you can manually update to match your setup requirements. These variable settings provide all the information that the JBoss Web Server collection requires to complete an automated and customized installation of Red Hat JBoss Web Server on your target hosts. For a full list of variables that the JBoss Web Server collection provides, see the information page for the jws role in Ansible automation hub . The information page for the jws role lists the names, descriptions, and default values for all the variables that you can define. Note You can define variables in multiple ways. By default, the JBoss Web Server collection includes an example playbook.yml file that links to a vars.yml file in the same playbooks folder. For illustrative purposes, the instructions in this section describe how to define variables in the vars.yml file that the collection provides. You can use a different way to define variables if you prefer. You can define variables to automate the following tasks: Install JBoss Web Server from archive files that you can choose to download either automatically or manually from the Red Hat Customer Portal . Install JBoss Web Server from RPM packages . Ensure that a supported JDK version is installed on your target hosts . Ensure that a product user account and group are created on your target hosts . Integrate JBoss Web Server with systemd . Configure the JBoss Web Server installation . You can also automate the deployment of web applications by adding customized tasks to the playbook, as described in Enabling the automated deployment of JBoss Web Server applications on your target hosts . 3.1. Enablement of automated installations of JBoss Web Server from archive files By default, the JBoss Web Server collection is configured to install Red Hat JBoss Web Server on each target host from product archive files. Depending on your setup requirements, you can enable the JBoss Web Server collection to install a base product release, product patch updates, or both simultaneously from archive files. You can choose to download the archive files manually from the Red Hat Customer Portal or enable the JBoss Web Server collection to download the archive files automatically. 3.1.1. Enabling the automated installation of a JBoss Web Server base release You can enable the JBoss Web Server collection to install the base release of a specified JBoss Web Server version from product archive files. A base release is the initial release of a specific product version (for example, 6.0.0 is the base release of version 6.0). The JBoss Web Server collection requires that local copies of the appropriate archive files are available on your Ansible control node. If copies of the archive files are not already on your system, you can set variables to specify Red Hat service account credentials to permit automatic file downloads from the Red Hat Customer Portal. Alternatively, you can download the archive files manually. Prerequisites You have installed the JBoss Web Server collection . If copies of the JBoss Web Server archive files are already on your system, you have copied these archive files to your Ansible control node. If you want the JBoss Web Server collection to download archive files automatically from the Red Hat Customer Portal, you have created a Red Hat service account. 
Note Service accounts enable you to securely and automatically connect and authenticate services or applications without requiring end-user credentials or direct interaction. To create a service account, log in to the Service Accounts page in the Red Hat Hybrid Cloud Console, and click Create service account . If you prefer to download the archive files manually, you have downloaded the appropriate archive files to your Ansible control node. For more information, see the Red Hat JBoss Web Server Installation Guide . Note If you manually download the archive files, you do not need to extract these files on your Ansible control node. In this situation, the JBoss Web Server collection extracts the archive files automatically. Procedure On your Ansible control node, open the vars.yml file. To specify the JBoss Web Server version that you want to install, set the jws_version variable to the appropriate base release. For example: Note Ensure that the value you specify for the jws_version variable matches the version of the product archive files that you want to install. For example, to install the archive files for JBoss Web Server 6.0, specify a value of 6.0.0 . By default, the JBoss Web Server collection is configured to install both the main application server archive and the native archive for the product version that you specify. If you set the jws_native variable to False , the JBoss Web Server collection does not install the native archive, which causes issues for features such as SELinux policies that require the installation of a native archive file. If you do not specify credentials for automatic file downloads as described in Step 3 , ensure that you have copied the archive files for the specified product version to your Ansible control node. In this situation, ensure that the copied native archive file matches the operating system version that is installed on your target hosts. If copies of the JBoss Web Server archive files do not exist on your Ansible control node, the collection contacts the Red Hat Customer Portal by default to download the archive files automatically. To ensure successful contact with the Red Hat Customer Portal, set the rhn_username and rhn_password variables to specify your Red Hat service account credentials. For example: In the preceding example, replace <client_ID> and <client_secret> with the client ID and secret that are associated with your Red Hat service account. Note By default, the collection automatically determines which native archive file matches the operating system version that is installed on your target hosts. If copies of the appropriate archive files already exist on your Ansible control node, the collection does not download these archive files again. If you prefer to download the archive files manually or you have already obtained these files in some other way, you can enforce a fully offline installation. For more information about enforcing offline installations, see Enabling the automated installation of JBoss Web Server product patch updates . If you changed the names of the downloaded archive files on your Ansible control node, set the zipfile_name and jws_native_zipfile variables to specify the files that you want to install. For example: In the preceding example, replace <application_server_file> and <native_file> with the appropriate archive file names. Note If you did not change the file names, you do not need to set the zipfile_name and jws_native_zipfile variables.
The JBoss Web Server collection uses the value of the jws_version variable to determine the default file names automatically. Save your changes to the vars.yml file. By setting these variables, as appropriate, you enable the JBoss Web Server collection to install the base product release automatically on your target hosts when you subsequently run the playbook. 3.1.2. Enabling the automated installation of JBoss Web Server patch updates If product patch updates are available for the JBoss Web Server version that is being installed, you can also enable the JBoss Web Server collection to install these patch updates from archive files. Depending on your requirements, you can enable the JBoss Web Server collection to install either the latest available patch or a specified patch release. You can use the same steps to enable the automated installation of patch updates regardless of whether you want to install these updates at the same time as the base release or later. The JBoss Web Server collection requires that local copies of the appropriate archive files are available on your Ansible control node. If copies of the archive files are not already on your system, you can set variables to specify Red Hat service account credentials to permit automatic file downloads from the Red Hat Customer Portal. Alternatively, you can download the archive files manually. Note Patch updates are cumulative, which means that each patch update automatically includes any earlier patch releases that are available for the same product version. For example, a 6.0.2 patch update would include the 6.0.1 release, a 6.0.3 patch update would include the 6.0.1 and 6.0.2 releases, and so on. Important You cannot use cumulative patch updates to install the base (X.X.0) release of a product version. For example, a 6.0.2 patch would include the 6.0.1 release but cannot install the base 6.0.0 release. In this situation, you must ensure that the base release of the appropriate product version (for example, 6.0.0) is also installed either at the same time or previously. Prerequisites You have installed the JBoss Web Server collection . If copies of the archive files for the patch update that you want to install are already on your system, you have copied these archive files to your Ansible control node. If you want the JBoss Web Server collection to download archive files automatically from the Red Hat Customer Portal, you have created a Red Hat service account. Note Service accounts enable you to securely and automatically connect and authenticate services or applications without requiring end-user credentials or direct interaction. To create a service account, log in to the Service Accounts page in the Red Hat Hybrid Cloud Console, and click Create service account . If you prefer to download the archive files manually, you have downloaded the appropriate archive files to your Ansible control node. For more information, see the Red Hat JBoss Web Server Installation Guide . Note Because patch updates are cumulative, you only need to download the archive files for the patch release that you want to install. You do not need to download any earlier patch updates. If you manually download the archive files, you do not need to extract these files on your Ansible control node. In this situation, the JBoss Web Server collection extracts the archive files automatically. Procedure On your Ansible control node, open the vars.yml file. Set the jws_apply_patches variable to True .
For example: Note Ensure that the jws_version variable is set to the base release for the appropriate product version (for example, 6.0.0 ). The JBoss Web Server collection is configured to install the latest patch update by default. The collection contacts the Red Hat Customer Portal to determine the correct patch to install. If you want the collection to install a specified patch release rather than the latest patch update, set the jws_patch_version variable to the patch release that you want to install. For example: Based on the preceding example, the collection installs the cumulative 6.0.2 patch only, even if later patches are also available. When the jws_apply_patches variable is set to True , the JBoss Web Server collection contacts the Red Hat Customer Portal by default to check if new patch updates are available. The collection also downloads patch updates, if necessary. To ensure successful contact with the Red Hat Customer Portal, set the rhn_username and rhn_password variables to specify your Red Hat service account credentials. For example: In the preceding example, replace <client_ID> and <client_secret> with the client ID and secret that are associated with your Red Hat service account. Note By default, the collection automatically determines which native archive file matches the operating system version that is installed on your target hosts. If copies of the appropriate archive files already exist on your Ansible control node, the collection does not download these archive files again. If the jws_patch_version variable is set to a specific patch release, the collection downloads the specified patch release only, even if later patches are also available. If you prefer to download the archive files manually or you have already obtained these files in some other way, you can enforce a fully offline installation as described in Step 5 . If you want to enforce a fully offline installation and prevent the collection from contacting the Red Hat Customer Portal, set the jws_offline_install variable to True . For example: Note The jws_offline_install variable is useful if your Ansible control node does not have internet access or you want the collection to avoid contacting the Red Hat Customer Portal for file downloads. In this situation, you must set the jws_patch_version variable to the patch release you want to install. Ensure that you have copied the archive files for the appropriate patch update to your Ansible control node. In this situation, ensure that the copied native archive file matches the operating system version that is installed on your target hosts. If you set the jws_offline_install variable to True , the collection does not attempt to contact the Red Hat Customer Portal, even if you have also set the rhn_username and rhn_password variables to permit automatic file downloads. Save your changes to the vars.yml file. By setting these variables, as appropriate, you enable the JBoss Web Server collection to install the product patch updates automatically on your target hosts when you subsequently run the playbook. 3.2. Enabling the automated installation of JBoss Web Server from RPM packages You can enable the JBoss Web Server collection to install Red Hat JBoss Web Server on each target host from RPM packages. In this situation, the JBoss Web Server collection automatically obtains the RPM packages directly from Red Hat. 
Note When you enable the RPM installation method, the JBoss Web Server collection installs the latest RPM packages for the specified major version of JBoss Web Server, including any minor version and patch updates. Prerequisites Your system is compliant with Red Hat Enterprise Linux package requirements . You have registered your system with Red Hat Subscription Management and subscribed to the relevant Content Delivery Network (CDN) repositories . You have installed the JBoss Web Server collection . Procedure On your Ansible control node, open the vars.yml file. To specify the JBoss Web Server version that you want to install, set the jws_version variable to the appropriate major product version. For example: Note In this situation, the JBoss Web Server collection checks the first digit in the specified value to determine the major product version that you want to install. For example, if you want the collection to install the latest available RPM packages for JBoss Web Server 6, you can specify a value of 6.0.0 . Regardless of the minor version and release number that you specify (for example, 0.0 ), the collection installs the packages for the latest minor version and patch release of the specified major version. To enable installation from RPM packages, set the jws_install_method variable to rpm . For example: Save your changes to the vars.yml file. By setting these variables, you enable the JBoss Web Server collection to obtain and automatically install the RPM packages for the specified product version on your target hosts when you subsequently run the playbook. Note If you enable the installation of RPM packages for JBoss Web Server 6.0, the collection installs JBoss Web Server in the /opt/rh/jws6/root/usr/share/tomcat directory. If you want to use a different installation directory, you can manually create a symbolic link to /opt/rh/jws6/root/usr/share/tomcat on each target host. 3.3. Ensuring that a JDK is installed on the target hosts JBoss Web Server requires that a Java Development Kit (JDK) is already installed as a prerequisite on your target hosts to ensure that JBoss Web Server operates successfully. A JDK includes a Java Runtime Environment (JRE) and Java Virtual Machine (JVM), which must be available on any host where you want to run JBoss Web Server. For a full list of JDK versions that JBoss Web Server supports, see JBoss Web Server 6 Supported Configurations . By default, the JBoss Web Server collection does not install a JDK automatically, based on the assumption that you have already installed a supported JDK on the target hosts. However, for the sake of convenience, you can configure the JBoss Web Server collection to install a supported version of Red Hat build of OpenJDK automatically on each target host. Consider the following guidelines for installing a JDK when you use the JBoss Web Server collection: If you want to install a supported version of Red Hat build of OpenJDK on your target hosts, you can set the jws_java_version variable to the appropriate JDK version (for example, 11 or 17 ). The JBoss Web Server collection automatically installs the specified Red Hat build of OpenJDK version on each target host when you subsequently run the playbook. If you want to install a supported version of IBM JDK or Oracle JDK, you must install the JDK manually on each target host or you can automate this process by using your own playbook. 
For more information about manually installing a version of IBM JDK or Oracle JDK, see the Red Hat JBoss Web Server Installation Guide . In this situation, you do not need to set a variable. If you already have a supported JDK installed on your target hosts, you do not need to set a variable. Note Use the following procedure if you want to enable the JBoss Web Server collection to install Red Hat build of OpenJDK on target hosts where a supported JDK is not already installed. Prerequisites You have installed the JBoss Web Server collection . Procedure On your Ansible control node, open the vars.yml file. Set the jws_java_version variable to the appropriate OpenJDK version that you want to install. For example: Based on the preceding example, the JBoss Web Server collection automatically installs Red Hat build of OpenJDK 11 on each target host when you run the playbook. Note Alternatively, if you want the JBoss Web Server collection to install Red Hat build of OpenJDK version 17, set the jws_java_version variable to 17 . Save your changes to the vars.yml file. 3.4. Ensuring that a product user and group are created on the target hosts JBoss Web Server requires that a product user account and user group are already created as a prerequisite on your target hosts. By default, the JBoss Web Server collection handles this requirement by creating a tomcat user account and a tomcat group automatically on each target host. However, if you want the JBoss Web Server collection to create a different user account and group, you can modify the behavior of the JBoss Web Server collection to match your setup requirements. The product user account is also assigned ownership of the Tomcat directories to run the Tomcat service. Note Use the following procedure if you want to enable the JBoss Web Server collection to create a different user account and group rather than the tomcat default values. Prerequisites You have installed the JBoss Web Server collection . Procedure On your Ansible control node, open the vars.yml file. Set the jws_user and jws_group variables to the appropriate product user name and group name that you want to create. For example: Based on the preceding example, the JBoss Web Server collection automatically creates a myuser user account and group instead of creating the default tomcat user account and group. Save your changes to the vars.yml file. 3.5. Enabling the automated integration of JBoss Web Server with systemd You can optionally enable the JBoss Web Server collection to set up JBoss Web Server as a service that a system daemon can manage. By default, the JBoss Web Server collection is not configured to integrate JBoss Web Server with a system daemon. If you enable this feature, the JBoss Web Server collection sets up JBoss Web Server as a jws6‐tomcat service automatically on each target host. However, if you want to use a different service name, you can modify the behavior of the JBoss Web Server collection to match your setup requirements. When you integrate JBoss Web Server with a system daemon, the system daemon can automatically start the JBoss Web Server services at system startup. The system daemon also provides functions to start, stop, and check the status of the product. The default system daemon is systemd . Note This configuration task is optional but recommended. Prerequisites You have installed the JBoss Web Server collection . Procedure On your Ansible control node, open the vars.yml file. 
To enable integration with systemd , set the jws_systemd_enabled variable to True . For example: If you want JBoss Web Server to use a service name other than jws6‐tomcat , set the jws_service_name variable to the appropriate value. For example: Based on the preceding example, the JBoss Web Server collection sets up the product as a jws service on each target host when you run the playbook. Note If you do not set the jws_service_name variable, the JBoss Web Server collection sets up the product as a jws6‐tomcat service automatically. If you did not enable the automated installation of Red Hat build of OpenJDK, also set the jws_java_home variable to specify the full path to the JDK that is installed on your target hosts. For example: Note To ensure successful integration with systemd , if you do not enable the automated installation of Red Hat build of OpenJDK, you must set the jws_java_home variable. This step is not required if you enable the automated installation of Red Hat build of OpenJDK, as described in Ensuring that a JDK is installed on the target hosts . Save your changes to the vars.yml file. 3.6. Enablement of automated JBoss Web Server configuration tasks The JBoss Web Server collection provides a comprehensive set of variables to enable the automated configuration of a JBoss Web Server installation. By default, the JBoss Web Server collection configures JBoss Web Server to listen for nonsecure HTTP connections on port 8080 . Other product features such as the following are disabled by default: Support for secure HTTPS connections Mod_cluster support for load-balancing HTTP server requests to the JBoss Web Server back end The password vault for storing sensitive data in an encrypted Java keystore To enable a wider set of product features, you can define variables to modify the behavior of the JBoss Web Server collection to match your setup requirements. Note The following subsections describe only a subset of the automated configuration updates that the JBoss Web Server collection can perform. These example updates focus on enabling support for HTTPS connections, enabling mod_cluster support, and enabling the password vault. For a full list of variables that the JBoss Web Server collection provides, refer to the information page for the jws role in Ansible automation hub . For more information about configuring and using JBoss Web Server features, refer to the Red Hat JBoss Web Server documentation page . 3.6.1. Enabling the automated configuration of HTTPS support in JBoss Web Server You can configure JBoss Web Server to support secure encrypted connections between web clients and the web server over the HTTPS protocol. Consider the following guidelines for enabling HTTPS support when you use the JBoss Web Server collection: If you want to enable HTTPS support, you must ensure that a Java keystore exists on each target host before you subsequently run the playbook. The JBoss Web Server collection does not provide or create a Java keystore automatically. In this situation, you must create a new keystore on your target hosts or copy an existing keystore file to each target host, as described in Step 1 of the following procedure. To enable HTTPS support, you can set a jws_listen_https_enabled variable to True . When you enable HTTPS support, the JBoss Web Server collection updates the server.xml file on each target host with the appropriate path and password settings for the Java keystore. 
By default, the JBoss Web Server collection configures these path and password settings in the server.xml file with values of /etc/ssl/keystore.jks and changeit , respectively. However, if you want to use a different keystore path or keystore password, you can modify the behavior of the JBoss Web Server collection to match your setup requirements. Prerequisites You have installed the JBoss Web Server collection . Procedure If you want to create a Java keystore, perform the following steps: Log in to the target host where you want to create the keystore. Note Ensure that a JDK is already installed and the JAVA_HOME variable is already set on the target host. To create the keystore, enter the following command: In the preceding command, replace <path_to_keystore> with the full path to the keystore file that you want to create. If you do not specify the -keystore option, the command creates the keystore file in some default location that depends on the version of the JDK you have installed. For example, if you are using Red Hat build of OpenJDK, the default location for the keystore is /etc/ssl/keystore.jks . The preceding command generates a keystore file that contains a pair of public and private keys and a single self-signed certificate for server authentication. The key pair and self-signed certificate are stored in a single keystore entry that is identified by the -alias option (for example, tomcat ). When the keytool command prompts you for the following information, enter the appropriate values for your setup: Keystore password (by default, changeit ) General information about the certificate Key password for the certificate (by default, the keystore password) Note Alternatively, rather than create a new keystore, you can use the Linux scp command to copy an existing keystore file between different hosts. To enable support for HTTPS connections, perform the following steps: On your Ansible control node , open the vars.yml file. Set the jws_listen_https_enabled variable to True . For example: If the Java keystore on each target host is located in a path other than /etc/ssl/keystore.jks , set the jws_listen_https_keystore_file variable to the appropriate value. For example: In the preceding example, replace <keystore_path> with the full path to the keystore file that is on each target host. Note If you do not set the jws_listen_https_keystore_file variable, the JBoss Web Server collection automatically configures the certificateKeystoreFile setting in the server.xml file with a value of /etc/ssl/keystore.jks . If the Java keystore on each target host uses a password other than changeit , set the jws_listen_https_keystore_password variable to the appropriate value. For example: In the preceding example, replace <keystore_password> with the correct password for the Java keystore that is on each target host. Note If you do not set the jws_listen_https_keystore_password variable, the JBoss Web Server collection automatically configures the certificateKeystorePassword setting in the server.xml with a value of changeit . Save your changes to the vars.yml file. 3.6.2. Enabling the automated configuration of mod_cluster support in JBoss Web Server The mod_cluster connector is a reduced-configuration and intelligent solution for load-balancing Apache HTTP Server requests to the JBoss Web Server back end. The mod_cluster connector also provides features such as real-time load-balancing calculations, application life-cycle control, automatic proxy discovery, and multiple protocol support. 
To enable mod_cluster support, you can define variables to enable the mod_cluster listener and specify IP address and port values for the mod_cluster instance. Prerequisites You have installed the JBoss Web Server collection . Procedure On your Ansible control node, open the vars.yml file. To enable the mod_cluster listener, set the jws_modcluster_enabled variable to True . For example: To specify the IP address and port of the mod_cluster instance, set the jws_modcluster_ip and jws_modcluster_port variables to the appropriate values. The default IP address is 127.0.0.1 . The default port is 6666 . For example: In the preceding example, replace <ip_address> with the appropriate bind address for the mod_cluster instance on the target host, and replace <port> with the appropriate port that the mod_cluster instance uses to listen for incoming requests. Save your changes to the vars.yml file. For more information about using mod_cluster , see the HTTP Connectors and Load Balancing Guide . 3.6.3. Enabling the automated configuration of the password vault in JBoss Web Server You can use the password vault for JBoss Web Server to mask passwords and other sensitive strings, and to store sensitive information in an encrypted Java keystore. When you use the password vault, you can stop storing clear-text passwords in your JBoss Web Server configuration files. JBoss Web Server can use the password vault to search for passwords and other sensitive strings from a keystore. To enable password vault, you can set a series of variables that enable you to specify various files and configuration settings that the password vault uses. Prerequisites You have installed the JBoss Web Server collection . You have created the required vault.keystore , VAULT.dat , and vault.properties files. For more information about creating these files, refer to the Red Hat JBoss Web Server Installation Guide . Procedure On your Ansible control node, open the vars.yml file. To specify the paths to the vault.keystore , VAULT.dat , and vault.properties files that you created as part of the prerequisite step, set the following variables to the appropriate values. For example: In the preceding example, ensure that you specify the correct paths that you configured as part of the prerequisite step. To enable the password vault feature, set the jws_tomcat_vault_enabled variable to True . For example: To specify the keystore alias, keystore password, iteration count, and salt values that you configured for the password vault, set the following variables to the appropriate values. For example: In the preceding example, ensure that you specify the appropriate values that you configured as part of the prerequisite step. Save your changes to the vars.yml file. For more information about using the password vault, refer to the Red Hat JBoss Web Server Installation Guide . 3.6.4. SELinux policies for JBoss Web Server You can use Security-Enhanced Linux (SELinux) policies to define access controls for JBoss Web Server. These policies are a set of rules that determine access rights to the product. The SELinux policies feature is enabled by default. When JBoss Web Server is being installed from archive files, the SELinux policies feature requires that the native archive file for the specified product version is also installed. By default, the JBoss Web Server collection is configured to install the native archive file that matches the operating system version on your target hosts. 3.7. 
Enabling the automated deployment of JBoss Web Server applications on your target hosts You can also automate the deployment of web applications on your target JBoss Web Server hosts by adding customized tasks to the playbook. This requires that you place the application .war file in the appropriate directory. If you want to deploy a new or updated application when JBoss Web Server is already running, the JBoss Web Server collection provides a handler to restart the web server when the application is deployed. Note The following procedure assumes that you have created a custom playbook. Prerequisites You have installed the JBoss Web Server collection . You are familiar with general Ansible concepts and creating Ansible playbooks. For more information, see the Ansible documentation . Procedure On your Ansible control node, open your custom playbook. In the tasks: section of the playbook, add a task to deploy the appropriate web application. For example: In the preceding example, replace <url_path> and <app_name> with the correct path and .war file name for the application that you want to deploy. Save your changes to the playbook. Additional resources Files modules Net Tools modules
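To tie the variables from this chapter together, the following is a minimal sketch of a vars.yml file and a playbook run, assuming the collection's example playbook.yml is used and that an inventory file for your target hosts already exists. Every value shown is an example taken from the sections above; adjust versions, credentials, and keystore settings to your environment before running it.
# Write an illustrative vars.yml next to the example playbook
cat > vars.yml << 'EOF'
# Archive-based installation of the JBoss Web Server 6.0 base release plus the latest patches
jws_version: 6.0.0
jws_apply_patches: True
# Red Hat service account credentials for automatic downloads
rhn_username: <client_ID>
rhn_password: <client_secret>
# Install Red Hat build of OpenJDK 17 and integrate with systemd
jws_java_version: 17
jws_systemd_enabled: True
# Enable HTTPS using a keystore that already exists on each target host
jws_listen_https_enabled: True
jws_listen_https_keystore_file: /etc/ssl/keystore.jks
jws_listen_https_keystore_password: changeit
EOF
# Run the example playbook against your inventory (the inventory file name is illustrative)
ansible-playbook -i inventory playbook.yml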
[ "[...] jws_version: 6.0.0", "[...] rhn_username: <client_ID> rhn_password: <client_secret>", "[...] zipfile_name: <application_server_file> jws_native_zipfile: <native_file>", "[...] jws_version: 6.0.0 [...] jws_apply_patches: True", "[...] jws_apply_patches: True jws_patch_version: 6.0.2", "[...] rhn_username: <client_ID> rhn_password: <client_secret>", "[...] jws_offline_install: True", "[...] jws_version: 6.0.0", "[...] jws_install_method: rpm", "[...] jws_java_version: 11", "[...] jws_user: myuser jws_group: myuser", "[...] jws_systemd_enabled: True", "[...] jws_service_name: jws", "[...] jws_java_home: <JAVA_HOME path>", "USDJAVA_HOME/bin/keytool -genkeypair -alias tomcat -keyalg RSA -keystore <path_to_keystore>", "[...] jws_listen_https_enabled: True", "[...] jws_listen_https_keystore_file: <keystore_path>", "[...] jws_listen_https_keystore_password: <keystore_password>", "[...] jws_modcluster_enabled: True", "[...] jws_modcluster_ip: <ip_address> jws_modcluster_port: <port>", "[...] jws_vault_name: ./vault_files/vault.keystore jws_vault_data: ./vault_files/VAULT.dat jws_vault_properties: ./vault_files/vault.properties", "[...] jws_tomcat_vault_enabled: True", "[...] jws_tomcat_vault_alias: <keystore_alias> jws_tomcat_vault_storepass: <keystore_password> jws_tomcat_vault_iteration: <iteration_count> jws_tomcat_vault_salt: <salt>", "[...] tasks: [...] - name: \"Deploy demo webapp\" ansible.builtin.get_url: url: 'https:// <url_path> / <app_name> .war' dest: \"{{ jws_home }}/webapps/ <app_name> .war\" [...]" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/installing_jboss_web_server_by_using_the_red_hat_ansible_certified_content_collection/define_variables
Preface
Preface Important Support for automation services catalog is no longer available for Ansible Automation Platform from version 2.4 onward.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/getting_started_with_automation_services_catalog/pr01
Chapter 50. General Updates
Chapter 50. General Updates The TAB key does not expand USDPWD by default When working in CLI in Red Hat Enterprise Linux 6, pressing the TAB key expanded USDPWD/ to the current directory. In Red Hat Enterprise Linux 7, CLI does not have the same behavior. Users can achieve this behavior by putting the following lines into the USDHOME/.bash_profile file: (BZ#1185416) gnome-getting-started-docs-* moved to the Optional channel As of Red Hat Enterprise Linux 7.3, the gnome-getting-started-docs-* packages have been moved from the Base channel to the Optional channel. Consequently, upgrading from an earlier version of Red Hat Enterprise Linux 7 fails, if these packages were previously installed. To work around this problem, uninstall gnome-getting-started-docs-* prior to upgrading to Red Hat Enterprise Linux 7.3. (BZ#1350802) The remote-viewer SPICE client fails to detect newly plugged-in smart card readers The libcacard library in Red Hat Enterprise Linux 7.3 fails to handle USB hot plug events. As a consequence, while the remote-viewer SPICE client is running, the application in some cases fails to detect a USB smart card reader when it is plugged in. To work around the problem, remove the smart card from the reader and reinsert it. (BZ# 1249116 )
[ "if ((BASH_VERSINFO[0] >= 4)) && ((BASH_VERSINFO[1] >= 2)); then shopt -s direxpand fi" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/known_issues_general_updates
6.5. POSIX Access Control Lists
6.5. POSIX Access Control Lists Basic Linux file system permissions are assigned based on three user types: the owning user, members of the owning group, and all other users. POSIX Access Control Lists (ACLs) work around the limitations of this system by allowing administrators to also configure file and directory access permissions based on any user and any group, rather than just the owning user and group. This section covers how to view and set access control lists, and how to ensure this feature is enabled on your Red Hat Gluster Storage volumes. For more detailed information about how ACLs work, see the Red Hat Enterprise Linux 7 System Administrator's Guide : https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/ch-Access_Control_Lists.html . 6.5.1. Setting ACLs with setfacl The setfacl command lets you modify the ACLs of a specified file or directory. You can add access rules for a file with the -m subcommand, or remove access rules for a file with the -x subcommand. The basic syntax is as follows: The syntax of an access rule depends on which roles need to obey the rule. Rules for users start with u: For example, setfacl -m u:fred:rw /mnt/data gives the user fred read and write access to the /mnt/data directory. setfacl -x u::w /works_in_progress/my_presentation.txt prevents all users from writing to the /works_in_progress/my_presentation.txt file (except the owning user and members of the owning group, as these are controlled by POSIX). Rules for groups start with g: For example, setfacl -m g:admins:rwx /etc/fstab gives users in the admins group read, write, and execute permissions to the /etc/fstab file. setfacl -x g:newbies:x /mnt/harmful_script.sh prevents users in the newbies group from executing /mnt/harmful_script.sh . Rules for other users start with o: For example, setfacl -m o:r /mnt/data/public gives users without any specific rules about their username or group permission to read files in the /mnt/data/public directory . Rules for setting a maximum access level using an effective rights mask start with m: For example, setfacl -m m:r-x /mount/harmless_script.sh gives all users a maximum of read and execute access to the /mount/harmless_script.sh file. You can set the default ACLs for a directory by adding d: to the beginning of any rule, or make a rule recursive with the -R option. For example, setfacl -Rm d:g:admins:rwx /etc gives all members of the admins group read, write, and execute access to any file created under the /etc directory after the point when setfacl is run. 6.5.2. Checking current ACLs with getfacl The getfacl command lets you check the current ACLs of a file or directory. The syntax for this command is as follows: This prints a summary of current ACLs for that file. For example: If a directory has default ACLs set, these are prefixed with default: , like so: 6.5.3. Mounting volumes with ACLs enabled To mount a volume with ACLs enabled using the Native FUSE Client, use the acl mount option. For further information, see Section 6.2.3, "Mounting Red Hat Gluster Storage Volumes" . ACLs are enabled by default on volumes mounted using the NFS and SMB access protocols. To check whether ACLs are enabled on other mounted volumes, see Section 6.5.4, "Checking ACL enablement on a mounted volume" . 6.5.4. Checking ACL enablement on a mounted volume The following table shows you how to verify that ACLs are enabled on a mounted volume, based on the type of client your volume is mounted with. Table 6.10. 
Client type How to check Further info Native FUSE Check the output of the mount command for the default_permissions option: If default_permissions appears in the output for a mounted volume, ACLs are not enabled on that volume. Check the output of the ps aux command for the gluster FUSE mount process (glusterfs): If --acl appears in the output for a mounted volume, ACLs are enabled on that volume. See Section 6.2, "Native Client" for more information. Gluster Native NFS On the server side, check the output of the gluster volume info volname command. If nfs.acl appears in the output, that volume has ACLs disabled. If nfs.acl does not appear, ACLs are enabled (the default state). On the client side, check the output of the mount command for the volume. If noacl appears in the output, ACLs are disabled on the mount point. If this does not appear in the output, the client checks that the server uses ACLs, and uses ACLs if server support is enabled. Refer to the output of gluster volume set help pertaining to NFS, or see the Red Hat Enterprise Linux Storage Administration Guide for more information: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-nfs.html NFS Ganesha On the server side, check the volume's export configuration file, /run/gluster/shared_storage/nfs-ganesha/exports/export. volname .conf . If the Disable_ACL option is set to true , ACLs are disabled. Otherwise, ACLs are enabled for that volume. Note NFS-Ganesha supports NFSv4 protocol standardized ACLs but not NFSACL protocol used for NFSv3 mounts. Only NFSv4 mounts can set ACLs. There is no option to disable NFSv4 ACLs on the client side, so as long as the server supports ACLs, clients can set ACLs on the mount point. See Section 6.3.3, "NFS Ganesha" for more information. For client side settings, refer to the Red Hat Enterprise Linux Storage Administration Guide : https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-nfs.html samba POSIX ACLs are enabled by default when using Samba to access a Red Hat Gluster Storage volume. See Section 6.4, "SMB" for more information.
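As an end-to-end sketch of the commands in this section, the following mounts a volume with ACLs enabled using the native FUSE client, grants an additional user access to a directory, and verifies the result. The server name, volume name, mount point, user, and directory are illustrative only.
# Mount the volume with POSIX ACL support enabled (native FUSE client)
mount -t glusterfs -o acl server1.example.com:/testvol /mnt/gluster
# Recursively give the user fred read, write, and execute access to a shared directory,
# and make that the default ACL for files created there in the future
setfacl -Rm u:fred:rwx,d:u:fred:rwx /mnt/gluster/shared
# Confirm the resulting ACL entries
getfacl /mnt/gluster/shared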
[ "setfacl subcommand access_rule file_path", "setfacl -m u: user : perms file_path", "setfacl -m g: group : perms file_path", "setfacl -m o: perms file_path", "setfacl -m m: mask file_path", "getfacl file_path", "getfacl /mnt/gluster/data/test/sample.jpg owner: antony group: antony user::rw- group::rw- other::r--", "getfacl /mnt/gluster/data/doc owner: antony group: antony user::rw- user:john:r-- group::r-- mask::r-- other::r-- default:user::rwx default:user:antony:rwx default:group::r-x default:mask::rwx default:other::r-x", "mount | grep mountpoint", "ps aux | grep gluster root 30548 0.0 0.7 548408 13868 ? Ssl 12:39 0:00 /usr/local/sbin/glusterfs --acl --volfile-server=127.0.0.2 --volfile-id=testvol /mnt/fuse_mnt" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-POSIX_Access_Control_Lists
Chapter 1. Red Hat Software Collections 3.0
Chapter 1. Red Hat Software Collections 3.0 This chapter serves as an overview of the Red Hat Software Collections 3.0 content set. It provides a list of components and their descriptions, sums up changes in this version, documents relevant compatibility information, and lists known issues. 1.1. About Red Hat Software Collections For certain applications, more recent versions of some software components are often needed in order to use their latest new features. Red Hat Software Collections is a Red Hat offering that provides a set of dynamic programming languages, database servers, and various related packages that are either more recent than their equivalent versions included in the base Red Hat Enterprise Linux system, or are available for this system for the first time. Red Hat Software Collections 3.0 is available for Red Hat Enterprise Linux 7; selected new components and previously released components are also available for Red Hat Enterprise Linux 6. For a complete list of components that are distributed as part of Red Hat Software Collections and a brief summary of their features, see Section 1.2, "Main Features" . Red Hat Software Collections does not replace the default system tools provided with Red Hat Enterprise Linux 6 or Red Hat Enterprise Linux 7. Instead, a parallel set of tools is installed in the /opt/ directory and can be optionally enabled per application by the user using the supplied scl utility. The default versions of Perl or PostgreSQL, for example, remain those provided by the base Red Hat Enterprise Linux system. All Red Hat Software Collections components are fully supported under Red Hat Enterprise Linux Subscription Level Agreements, are functionally complete, and are intended for production use. Important bug fix and security errata are issued to Red Hat Software Collections subscribers in a similar manner to Red Hat Enterprise Linux for at least two years from the release of each major version. In each major release stream, each version of a selected component remains backward compatible. For detailed information about length of support for individual components, refer to the Red Hat Software Collections Product Life Cycle document. 1.1.1. Red Hat Developer Toolset Red Hat Developer Toolset is a part of Red Hat Software Collections, included as a separate Software Collection. For more information about Red Hat Developer Toolset, refer to the Red Hat Developer Toolset Release Notes and the Red Hat Developer Toolset User Guide . 1.2. Main Features Red Hat Software Collections 3.0 provides recent stable versions of the tools listed in Table 1.1, "Red Hat Software Collections 3.0 Components" . Table 1.1. Red Hat Software Collections 3.0 Components Component Software Collection Description Red Hat Developer Toolset 7.0 devtoolset-7 Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Eclipse 4.6.3 [a] rh-eclipse46 A release of the Eclipse integrated development environment that is based on the Eclipse Foundation's Neon release train. Eclipse was previously available as a Red Hat Developer Toolset component. This Software Collection depends on the rh-java-common component.
Perl 5.20.1 rh-perl520 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. The rh-perl520 Software Collection provides additional utilities, scripts, and database connectors for MySQL and PostgreSQL . Also, it includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Perl 5.24.0 rh-perl524 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. The rh-perl524 Software Collection provides additional utilities, scripts, and database connectors for MySQL and PostgreSQL . It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules. PHP 5.6.25 rh-php56 A release of PHP with PEAR 1.9.5 and enhanced language features including constant expressions, variadic functions, arguments unpacking, and the interactive debugger . The memcache , mongo , and XDebug extensions are also included. PHP 7.0.10 rh-php70 A release of PHP 7.0 with PEAR 1.10, enhanced language features and performance improvement . PHP 7.1.8 [a] rh-php71 A release of PHP 7.1 with PEAR 1.10, APCu 5.1.8 , and enhanced language features. Python 2.7.13 python27 A release of Python 2.7 with a number of additional utilities. This Python version provides various features and enhancements, including an ordered dictionary type, faster I/O operations, and improved forward compatibility with Python 3. The python27 Software Collections contains the Python 2.7.13 interpreter , a set of extension libraries useful for programming web applications and mod_wsgi (only supported with the httpd24 Software Collection), MySQL and PostgreSQL database connectors, and numpy and scipy . Python 3.4.2 rh-python34 A release of Python 3 with a number of additional utilities. This Software Collection gives developers on Red Hat Enterprise Linux access to Python 3 and allows them to benefit from various advantages and new features of this version. The rh-python34 Software Collection contains Python 3.4.2 interpreter , a set of extension libraries useful for programming web applications and mod_wsgi (only supported with the httpd24 Software Collection), PostgreSQL database connector, and numpy and scipy . Python 3.5.1 rh-python35 The rh-python35 Software Collection contains Python 3.5.1 interpreter , a set of extension libraries useful for programming web applications and mod_wsgi (only supported with the httpd24 Software Collection), PostgreSQL database connector, and numpy and scipy . Python 3.6.3 rh-python36 The rh-python36 Software Collection contains Python 3.6.3, which introduces a number of new features, such as f-strings, syntax for variable annotations, and asynchronous generators and comprehensions . In addition, a set of extension libraries useful for programming web applications is included, with mod_wsgi (supported only together with the httpd24 Software Collection), PostgreSQL database connector, and numpy and scipy . Ruby 2.2.2 rh-ruby22 A release of Ruby 2.2. This version provides substantial performance and reliability improvements, including incremental and symbol garbage collection and many others, while maintaining source level backward compatibility with Ruby 2.0.0 and Ruby 1.9.3. Ruby 2.3.1 rh-ruby23 A release of Ruby 2.3. 
This version introduces a command-line option to freeze all string literals in the source files, a safe navigation operator, and multiple performance enhancements , while maintaining source-level backward compatibility with Ruby 2.2.2, Ruby 2.0.0, and Ruby 1.9.3. Ruby 2.4.0 rh-ruby24 A release of Ruby 2.4. This version provides multiple performance improvements and enhancements, for example improved hash table, new debugging features, support for Unicode case mappings, and support for OpenSSL 1.1.0 . Ruby 2.4.0 maintains source-level backward compatibility with Ruby 2.3.1, Ruby 2.2.2, Ruby 2.0.0, and Ruby 1.9.3. Ruby on Rails 4.1.5 rh-ror41 A release of Ruby on Rails 4.1, a web application development framework written in the Ruby language. This version provides a number of new features including Spring application preloader, config/secrets.yml, Action Pack variants, and Action Mailer previews . This Software Collection is supported together with the rh-ruby22 Collection. Ruby on Rails 4.2.6 rh-ror42 A release of Ruby on Rails 4.2, a web application framework written in the Ruby language. Highlights in this release include Active Job, asynchronous mails, Adequate Record, Web Console, and foreign key support . This Software Collection is supported together with the rh-ruby23 and rh-nodejs4 Collections. Ruby on Rails 5.0.1 rh-ror50 A release of Ruby on Rails 5.0, the latest version of the web application framework written in the Ruby language. Notable new features include Action Cable, API mode, exclusive use of rails CLI over Rake, and ActionRecord attributes. This Software Collection is supported together with the rh-ruby24 and rh-nodejs6 Collections. Scala 2.10.6 [a] rh-scala210 A release of Scala, a general purpose programming language for the Java platform, which integrates features of object-oriented and functional languages. MariaDB 10.0.28 rh-mariadb100 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version adds the PAM authentication plugin to MariaDB. MariaDB 10.1.19 rh-mariadb101 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version adds the Galera Cluster support . MariaDB 10.2.8 rh-mariadb102 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version adds MariaDB Backup, Flashback, support for Recursive Common Table Expressions, window functions, and JSON functions . MongoDB 2.6.9 rh-mongodb26 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database . This Software Collection includes the mongo-java-driver package version 2.14.1. MongoDB 3.2.10 rh-mongodb32 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database . This Software Collection includes the mongo-java-driver package version 3.2.1. MongoDB 3.4.9 rh-mongodb34 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database. This release introduces support for new architectures, adds message compression and support for the decimal128 type, enhances collation features and more. 
MongoDB 3.0.11 upgrade collection rh-mongodb30upg A limited version of MongoDB 3.0 is available to provide an upgrade path from MongoDB 2.6 to MongoDB 3.2 for customers with existing MongoDB databases. MySQL 5.6.37 rh-mysql56 A release of MySQL, which provides a number of new features and enhancements, including improved performance. MySQL 5.7.19 rh-mysql57 A release of MySQL, which provides a number of new features and enhancements, including improved performance. PostgreSQL 9.4.14 rh-postgresql94 A release of PostgreSQL, which provides a new data type to store JSON more efficiently and a new SQL command for changing configuration files, reduces lock strength for some commands, allows materialized views without blocking concurrent reads, supports logical decoding of WAL data to allow streaming of changes in a customizable format, and enables background worker processes to be dynamically registered, started, and terminated. PostgreSQL 9.5.9 rh-postgresql95 A release of PostgreSQL, which provides a number of enhancements, including row-level security control, introduces replication progress tracking, improves handling of large tables with a high number of columns, and improves performance for sorting and multi-CPU machines. PostgreSQL 9.6.5 rh-postgresql96 A release of PostgreSQL, which introduces parallel execution of sequential scans, joins, and aggregates, and provides enhancements to synchronous replication, full-text search, the data federation driver postgres_fdw, as well as performance improvements. Node.js 4.6.2 rh-nodejs4 A release of Node.js, which provides a JavaScript runtime built on Chrome's V8 JavaScript engine and npm 2.15.1 , a package manager for JavaScript. This version includes an enhanced API, multiple security and bug fixes, and support for the SPDY protocol version 3.1. Node.js 6.11.3 rh-nodejs6 A release of Node.js, which provides multiple API enhancements, performance and security improvements, ECMAScript 2015 support , and npm 3.10.9 . Node.js 8.6.0 [a] rh-nodejs8 A release of Node.js, which provides multiple API enhancements and new features, including V8 engine version 6.0, npm 5.3.0 and npx, enhanced security, experimental N-API support, and performance improvements. nginx 1.8.1 rh-nginx18 A release of nginx, a web and proxy server with a focus on high concurrency, performance and low memory usage. This version introduces a number of new features, including back-end SSL certificate verification, logging to syslog, thread pools support for offloading I/O requests, or hash load balancing method . nginx 1.10.2 rh-nginx110 A release of nginx, a web and proxy server with a focus on high concurrency, performance and low memory usage. This version introduces a number of new features, including dynamic module support, HTTP/2 support, Perl integration, and numerous performance improvements . nginx 1.12.1 [a] rh-nginx112 A release of nginx, a web and proxy server with a focus on high concurrency, performance and low memory usage. This version introduces a number of new features, including IP Transparency, improved TCP/UDP load balancing, enhanced caching performance, and numerous performance improvements . Apache httpd 2.4.27 httpd24 A release of the Apache HTTP Server (httpd), including a high performance event-based processing model, enhanced SSL module and FastCGI support . The mod_auth_kerb module is also included. Varnish Cache 4.0.3 rh-varnish4 A release of Varnish Cache, a high-performance HTTP reverse proxy . 
Varnish Cache stores files or fragments of files in memory that are used to reduce the response time and network bandwidth consumption on future equivalent requests. Maven 3.3.9 rh-maven33 A release of Maven, a software project management and comprehension tool used primarily for Java projects. This version provides various enhancements, for example, improved core extension mechanism . Maven 3.5.0 [a] rh-maven35 A release of Maven, a software project management and comprehension tool. This release introduces support for new architectures and a number of new features, including colorized logging . Passenger 4.0.50 rh-passenger40 A release of Phusion Passenger, a web and application server, designed to be fast, robust, and lightweight. It supports Ruby using the ruby193 , ruby200 , or rh-ruby22 Software Collections together with Ruby on Rails using the ror40 or rh-ror41 Collections. It can also be used with nginx 1.6 from the nginx16 Software Collection and with Apache httpd from the httpd24 Software Collection. Git 2.9.3 rh-git29 A release of Git, a distributed revision control system with a decentralized architecture. As opposed to centralized version control systems with a client-server model, Git ensures that each working copy of a Git repository is its exact copy with complete revision history. Redis 3.2.4 rh-redis32 A release of Redis 3.2, a persistent key-value database . Common Java Packages 1.1 rh-java-common This Software Collection provides common Java libraries and tools used by other collections. The rh-java-common Software Collection is required by the devtoolset-4 , devtoolset-3 , rh-maven33 , maven30 , rh-mongodb32 , rh-mongodb26 , thermostat1 , rh-thermostat16 , and rh-eclipse46 components and it is not supposed to be installed directly by users. V8 3.14.5.10 v8314 This Software Collection provides the V8 JavaScript engine and is supported only as a dependency for the mongodb24 , rh-mongodb26 , rh-mongodb30upg , ruby193 , ror40 , and rh-ror41 Software Collections. [a] This Software Collection is available only for Red Hat Enterprise Linux 7 Previously released Software Collections remain available in the same distribution channels. All currently available Software Collections are listed in the Table 1.2, "All Available Software Collections" . See the Red Hat Software Collections Product Life Cycle document for information on the length of support for individual components. For detailed information regarding previously released components, refer to the Release Notes for earlier versions of Red Hat Software Collections. Table 1.2. All Available Software Collections Component Software Collection Availability Architectures supported on RHEL7 Components New in Red Hat Software Collections 3.0 Red Hat Developer Toolset 7.0 devtoolset-7 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le PHP 7.1.8 rh-php71 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.12.1 rh-nginx112 RHEL7 x86_64, s390x, aarch64, ppc64le Python 3.6.3 rh-python36 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.5.0 rh-maven35 RHEL7 x86_64, s390x, aarch64, ppc64le MariaDB 10.2.8 rh-mariadb102 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 9.6.5 rh-postgresql96 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.4.9 rh-mongodb34 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 8.6.0 rh-nodejs8 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. 
All Available Software Collections Components Updated in Red Hat Software Collections 3.0 Apache httpd 2.4.27 httpd24 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.4 Red Hat Developer Toolset 6.1 devtoolset-6 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Scala 2.10.6 rh-scala210 RHEL7 x86_64 nginx 1.10.2 rh-nginx110 RHEL6, RHEL7 x86_64 Node.js 6.11.3 rh-nodejs6 RHEL6, RHEL7 x86_64 Ruby 2.4.0 rh-ruby24 RHEL6, RHEL7 x86_64 Ruby on Rails 5.0.1 rh-ror50 RHEL6, RHEL7 x86_64 Eclipse 4.6.3 rh-eclipse46 RHEL7 x86_64 Python 2.7.13 python27 RHEL6, RHEL7 x86_64 Thermostat 1.6.6 rh-thermostat16 * RHEL6, RHEL7 x86_64 Maven 3.3.9 rh-maven33 RHEL6, RHEL7 x86_64 Common Java Packages 1.1 rh-java-common RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.3 Git 2.9.3 rh-git29 RHEL6, RHEL7 x86_64 Redis 3.2.4 rh-redis32 RHEL6, RHEL7 x86_64 Perl 5.24.0 rh-perl524 RHEL6, RHEL7 x86_64 PHP 7.0.10 rh-php70 RHEL6, RHEL7 x86_64 MySQL 5.7.19 rh-mysql57 RHEL6, RHEL7 x86_64 Python 3.5.1 rh-python35 RHEL6, RHEL7 x86_64 MongoDB 3.2.10 rh-mongodb32 RHEL6, RHEL7 x86_64 Ruby 2.3.1 rh-ruby23 RHEL6, RHEL7 x86_64 PHP 5.6.25 rh-php56 RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.2 Red Hat Developer Toolset 4.1 devtoolset-4 RHEL6, RHEL7 x86_64 MariaDB 10.1.19 rh-mariadb101 RHEL6, RHEL7 x86_64 MongoDB 3.0.11 upgrade collection rh-mongodb30upg RHEL6, RHEL7 x86_64 Node.js 4.6.2 rh-nodejs4 RHEL6, RHEL7 x86_64 PostgreSQL 9.5.9 rh-postgresql95 RHEL6, RHEL7 x86_64 Ruby on Rails 4.2.6 rh-ror42 RHEL6, RHEL7 x86_64 MongoDB 2.6.9 rh-mongodb26 RHEL6, RHEL7 x86_64 Thermostat 1.4.4 thermostat1 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.1 Varnish Cache 4.0.3 rh-varnish4 RHEL6, RHEL7 x86_64 nginx 1.8.1 rh-nginx18 RHEL6, RHEL7 x86_64 Node.js 0.10 nodejs010 * RHEL6, RHEL7 x86_64 Maven 3.0.5 maven30 * RHEL6, RHEL7 x86_64 V8 3.14.5.10 v8314 RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.0 Red Hat Developer Toolset 3.1 devtoolset-3 * RHEL6, RHEL7 x86_64 Perl 5.20.1 rh-perl520 RHEL6, RHEL7 x86_64 Python 3.4.2 rh-python34 RHEL6, RHEL7 x86_64 Ruby 2.2.2 rh-ruby22 RHEL6, RHEL7 x86_64 Ruby on Rails 4.1.5 rh-ror41 RHEL6, RHEL7 x86_64 MariaDB 10.0.28 rh-mariadb100 RHEL6, RHEL7 x86_64 MySQL 5.6.37 rh-mysql56 RHEL6, RHEL7 x86_64 PostgreSQL 9.4.14 rh-postgresql94 RHEL6, RHEL7 x86_64 Passenger 4.0.50 rh-passenger40 RHEL6, RHEL7 x86_64 PHP 5.4.40 php54 * RHEL6, RHEL7 x86_64 PHP 5.5.21 php55 * RHEL6, RHEL7 x86_64 nginx 1.6.2 nginx16 * RHEL6, RHEL7 x86_64 DevAssistant 0.9.3 devassist09 * RHEL6, RHEL7 x86_64 Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 1 Git 1.9.4 git19 * RHEL6, RHEL7 x86_64 Perl 5.16.3 perl516 * RHEL6, RHEL7 x86_64 Python 3.3.2 python33 * RHEL6, RHEL7 x86_64 Ruby 1.9.3 ruby193 * RHEL6, RHEL7 x86_64 Ruby 2.0.0 ruby200 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.0.2 ror40 * RHEL6, RHEL7 x86_64 MariaDB 5.5.53 mariadb55 * RHEL6, RHEL7 x86_64 MongoDB 2.4.9 mongodb24 * RHEL6, RHEL7 x86_64 MySQL 5.5.52 mysql55 * RHEL6, RHEL7 x86_64 PostgreSQL 9.2.18 postgresql92 * RHEL6, RHEL7 x86_64 RHEL6 - Red Hat Enterprise Linux 6 RHEL7 - Red Hat Enterprise Linux 7 x86_64 - AMD64 and Intel 64 architectures s390x - IBM z Systems aarch64 - The 64-bit ARM architecture ppc64 - IBM POWER, big endian ppc64le - IBM POWER, little endian * Retired component - this Software Collection is no longer supported The tables above list the latest versions available through asynchronous updates. Note that Software Collections released in Red Hat Software Collections 2.0 and later include a rh- prefix in their names. 1.3. Changes in Red Hat Software Collections 3.0 1.3.1. Overview Architectures The Red Hat Software Collections offering contains packages for Red Hat Enterprise Linux 7 running on AMD64 and Intel 64 architectures; certain Software Collections are available also for Red Hat Enterprise Linux 6. In addition, Red Hat Software Collections 3.0 introduces support for the following architectures on Red Hat Enterprise Linux 7: The 64-bit ARM architecture IBM z Systems IBM POWER, little endian New Software Collections Red Hat Software Collections 3.0 adds these new Software Collections: devtoolset-7 - see Section 1.3.3, "Changes in Red Hat Developer Toolset" rh-mariadb102 - see Section 1.3.4, "Changes in MariaDB" rh-maven35 - see Section 1.3.5, "Changes in Maven" rh-mongodb34 - see Section 1.3.6, "Changes in MongoDB" rh-nginx112 - see Section 1.3.7, "Changes in nginx" rh-nodejs8 - see Section 1.3.8, "Changes in Node.js" rh-php71 - see Section 1.3.9, "Changes in PHP" rh-postgresql96 - see Section 1.3.10, "Changes in PostgreSQL" rh-python36 - see Section 1.3.11, "Changes in Python" Updated Software Collections The following component has been updated in Red Hat Software Collections 3.0: httpd24 - see Section 1.3.12, "Changes in Apache httpd" Red Hat Software Collections Container Images The following container images are new in Red Hat Software Collections 3.0: rhscl/devtoolset-7-toolchain-rhel7 rhscl/devtoolset-7-perftools-rhel7 rhscl/mariadb-102-rhel7 rhscl/mongodb-34-rhel7 rhscl/nginx-112-rhel7 rhscl/nodejs-8-rhel7 rhscl/php-71-rhel7 rhscl/postgresql-96-rhel7 rhscl/python-36-rhel7 The following container image has been updated in Red Hat Software Collections 3.0: rhscl/httpd-24-rhel7 For detailed information regarding Red Hat Software Collections container images, see Section 3.4, "Red Hat Software Collections Container Images" . 1.3.2. General Changes The /usr/bin/scl enable command can now be used in the #! (shebang) line of a script. This enables interpreted scripts to use Python , PHP , Perl or Node.js interpreters from Software Collections. Previously, interpreted scripts could be executed only indirectly or from within the scl environment. 1.3.3. 
Changes in Red Hat Developer Toolset The following components have been upgraded in Red Hat Developer Toolset 7.0 compared to the previous release of Red Hat Developer Toolset: GCC to version 7.2.1 binutils to version 2.28 elfutils to version 0.170 make to version 4.2.1 GDB to version 8.0.1 strace to version 4.17 SystemTap to version 3.1 Valgrind to version 3.13.0 OProfile to version 1.2.0 Dyninst to version 9.3.2 For detailed information on changes in Red Hat Developer Toolset 7.0, see the Red Hat Developer Toolset User Guide . 1.3.4. Changes in MariaDB The new rh-mariadb102 Software Collection provides MariaDB 10.2.8 . The most notable changes in this version include: MariaDB Backup Flashback Support for Recursive Common Table Expressions Window functions A complete set of JSON functions The mysqlbinlog utility now supports continuous binary log backups Refer to the upstream documentation for further changes and improvements. In addition, this Software Collection includes the rh-mariadb102-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other resources. After installing the rh-mariadb102*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb102* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. For migration instructions, refer to Section 5.1, "Migrating to MariaDB 10.2" . 1.3.5. Changes in Maven The new rh-maven35 Software Collection includes Maven 3.5.0 , which provides a number of bug fixes and enhancements over the previous version. Notably, color logging on console is now supported for improved output visibility. The rh-maven35 Software Collection is available only for Red Hat Enterprise Linux 7. For detailed changes in Maven 3.5.0 , see the upstream release notes . 1.3.6. Changes in MongoDB The new rh-mongodb34 Software Collection includes MongoDB 3.4.9 , which provides a number of bug fixes and enhancements over the previous version. The most notable changes are: MongoDB Zones for maintaining geographic data locality, implementing tiered storage, or ensuring continuous service availability across data centers Elastic scalability, which provides faster auto-balancing of data across nodes, faster replica set synchronization, and intra-cluster network compression Tunable consistency controls, improving the way queries are routed across a distributed cluster with secondary consistency control and providing the ability to configure linearizable reads The following subpackages have also been updated: mongo-cxx-driver to version 3.1.2 mongo-tools to version 3.4.7 mongo-java-driver to version 3.5.0 For detailed changes in MongoDB 3.4 , refer to the upstream release notes . Note that the rh-mongodb34-mongo-java-driver package is available only for Red Hat Enterprise Linux 7. On Red Hat Enterprise Linux 6, use the updated mongo-java-driver package from the rh-mongodb32 Software Collection instead, which has been updated through an asynchronous release. The rh-mongodb34 Software Collection does not require the rh-java-common Collection for runtime. In addition, this Software Collection includes the rh-mongodb34-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other resources. 
After installing the rh-mongodb34*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb34* packages. Note The rh-mongodb34-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . For instructions regarding migration, see Section 5.2, "Migrating to MongoDB 3.4" . 1.3.7. Changes in nginx The new rh-nginx112 Software Collection provides nginx 1.12.1 , which introduces a number of new features, including: IP Transparency Support for variables Improvements to HTTP/2 Improved TCP/UDP load balancing Enhanced caching performance Support for multiple SSL certificates of different types Enhancements to the stream module Improved support for dynamic modules Numerous performance improvements For more information regarding changes in nginx 1.12 , see the upstream release notes . The rh-nginx112 Software Collection is available only for Red Hat Enterprise Linux 7.4 and later versions. Note that the rh-nginx112 Software Collection does not support integration with Phusion Passenger . Users requiring nginx with Passenger support should continue using the rh-nginx18 Software Collection, which provides nginx version 1.8. The rh-nginx112 Software Collection has optional support for Perl in conjunction with the rh-perl524 Software Collection. To be able to configure Perl handlers and call Perl functions from SSI scripts, install the rh-nginx112-nginx-mod-http-perl package. For more information, see the upstream documentation . For migration instructions, see Section 5.5, "Migrating to nginx 1.12" . 1.3.8. Changes in Node.js The new rh-nodejs8 Software Collection includes Node.js 8.6.0 , npm 5.3.0 , and npx . This version provides numerous new features, security and bug fixes. Notable features are as follows: A new async_hooks module V8 engine version 6.0 Experimental support for N-API Support for HTTP/2 Performance improvements Node.js 8.6.0 also deprecates several modules and command-line arguments. For detailed changes, see the upstream release notes and upstream documentation . The rh-nodejs8 Software Collection is available only for Red Hat Enterprise Linux 7.4 and later versions. The rh-nodejs6 Software Collection has been upgraded to version 6.11.3 with security and bug fixes through an asynchronous update. For more information about Node.js 6.11.3 , see the upstream release notes . 1.3.9. Changes in PHP The new rh-php71 Software Collection includes PHP 7.1.8 , PEAR 1.10.4 , and the APCu extension version 5.1.8. The rh-php71 Software Collection is available only for Red Hat Enterprise Linux 7. For detailed information on bug fixes and enhancements provided by rh-php71 , see the upstream change log . For information regarding migrating from PHP 7.0 to PHP 7.1 , see the upstream migration guide . 1.3.10. Changes in PostgreSQL The new rh-postgresql96 Software Collection provides PostgreSQL 9.6.5 . 
The notable changes in this release include: Parallel execution of sequential scans, joins, and aggregates Enhancements to synchronous replication Improved full-text search enabling users to search for phrases The postgres_fdw data federation driver now supports remote joins, sorts, UPDATEs, and DELETEs Substantial performance improvements, especially regarding scalability on multi-CPU-socket servers For detailed changes, see the upstream documentation . In addition, this Software Collection includes the rh-postgresql96-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other resources. After installing the rh-postgresql96*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgresql96* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. For information on migration, see Section 5.4, "Migrating to PostgreSQL 9.6" . 1.3.11. Changes in Python The new rh-python36 Software Collection contains Python 3.6.3 , which introduces a number of new features, for example: Formatted string literals (f-strings) Syntax for variable annotations Asynchronous generators Asynchronous comprehensions New secrets module A new implementation of the dict mapping type - dictionaries are now faster and use 20% to 25% less memory For further enhancements and changes, refer to the upstream documentation . The rh-python36 Software Collection also provides a suite of Python libraries and tools. The most notable ones are available in the following versions: pip 9.0.1 scipy 0.19.1 numpy 1.13.1 mod_wsgi 4.5.18 (supported only together with the httpd24 Software Collection) PyMySQL 0.7.11 1.3.12. Changes in Apache httpd The httpd24 Software Collection has been upgraded to upstream version 2.4.27, which provides a number of bug fixes and enhancements over the previous version, including multiple improvements to HTTP/2 support. Note that in httpd 2.4.27 , the mod_http2 module is no longer supported with the default prefork Multi-Processing Module (MPM). To enable HTTP/2 support, edit the configuration file at /opt/rh/httpd24/root/etc/httpd/conf.modules.d/00-mpm.conf and switch to the event or worker MPM. For more information on changes in httpd 2.4.27 , see the upstream release notes . 1.4. Compatibility Information Red Hat Software Collections 3.0 is available for all supported releases of Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures, the 64-bit ARM architecture, IBM z Systems, and IBM POWER, little endian. Certain components are available also for all supported releases of Red Hat Enterprise Linux 6 on AMD64 and Intel 64 architectures. For a full list of available components, see Table 1.2, "All Available Software Collections" . 1.5. Known Issues httpd24 component, BZ# 1429006 Since httpd 2.4.27 , the mod_http2 module is no longer supported with the default prefork Multi-Processing Module (MPM). To enable HTTP/2 support, edit the configuration file at /opt/rh/httpd24/root/etc/httpd/conf.modules.d/00-mpm.conf and switch to the event or worker MPM. Note that the HTTP/2 server-push feature does not work on the 64-bit ARM architecture, IBM z Systems, and IBM POWER, little endian. httpd24 component, BZ# 1327548 The mod_ssl module does not support the ALPN protocol on Red Hat Enterprise Linux 6, or on Red Hat Enterprise Linux 7.3 and earlier. 
Consequently, clients that support upgrading TLS connections to HTTP/2 only using ALPN are limited to HTTP/1.1 support. httpd24 component, BZ# 1224763 When using the mod_proxy_fcgi module with FastCGI Process Manager (PHP-FPM), httpd uses port 8000 for the FastCGI protocol by default instead of the correct port 9000 . To work around this problem, specify the correct port explicitly in configuration. httpd24 component, BZ# 1382706 When SELinux is enabled, the LD_LIBRARY_PATH environment variable is not passed through to CGI scripts invoked by httpd . As a consequence, in some cases it is impossible to invoke executables from Software Collections enabled in the /opt/rh/httpd24/service-environment file from CGI scripts run by httpd . To work around this problem, set LD_LIBRARY_PATH as desired from within the CGI script. httpd24 component Compiling external applications against the Apache Portable Runtime (APR) and APR-util libraries from the httpd24 Software Collection is not supported. The LD_LIBRARY_PATH environment variable is not set in httpd24 because it is not required by any application in this Software Collection. rh-python34 , rh-python35 , rh-python36 components, BZ# 1499990 The pytz module, which is used by Babel for time zone support, is not included in the rh-python34 , rh-python35 , and rh-python36 Software Collections. Consequently, when the user tries to import the dates module from Babel , a traceback is returned. To work around this problem, install pytz through the pip package manager from the pypi public repository by using the pip install pytz command. rh-python36 component Certain complex trigonometric functions provided by numpy might return incorrect values on the 64-bit ARM architecture, IBM z Systems, and IBM POWER, little endian. The AMD64 and Intel 64 architectures are not affected by this problem. python27 component, BZ# 1330489 The python27-python-pymongo package has been updated to version 3.2.1. Note that this version is not fully compatible with the previously shipped version 2.5.2. python27 component In Red Hat Enterprise Linux 7, when the user tries to install the python27-python-debuginfo package, the /usr/src/debug/Python-2.7.5/Modules/socketmodule.c file conflicts with the corresponding file from the python-debuginfo package installed on the core system. Consequently, installation of the python27-python-debuginfo fails. To work around this problem, uninstall the python-debuginfo package and then install the python27-python-debuginfo package. scl-utils component Due to an architecture-specific macro bug in the scl-utils package, the <collection>/root/usr/lib64/ directory does not have the correct package ownership on the 64-bit ARM architecture and on IBM POWER, little endian. As a consequence, this directory is not removed when a Software Collection is uninstalled. To work around this problem, manually delete <collection>/root/usr/lib64/ when removing a Software Collection. rh-ruby24 , rh-ruby23 components Determination of RubyGem installation paths is dependent on the order in which multiple Software Collections are enabled. The required order has been changed since Ruby 2.3.1 shipped in Red Hat Software Collections 2.3 to support dependent Collections. As a consequence, RubyGem paths, which are used for gem installation during an RPM build, are invalid when the Software Collections are supplied in an incorrect order. For example, the build now fails if the RPM spec file contains scl enable rh-ror50 rh-nodejs6 . 
To work around this problem, enable the rh-ror50 Software Collection last, for example, scl enable rh-nodejs6 rh-ror50 . rh-maven35 , rh-maven33 components When the user has installed both the Red Hat Enterprise Linux system version of maven-local package and the rh-maven35-maven-local package or rh-maven33-maven-local package , XMvn , a tool used for building Java RPM packages, run from the rh-maven35 or rh-maven33 Software Collection tries to read the configuration file from the base system and fails. To work around this problem, uninstall the maven-local package from the base Red Hat Enterprise Linux system. rh-nodejs4 component, BZ# 1316626 The /opt/rh/rh-nodejs4/root/usr/share/licenses/ directory is not owned by any package. Consequently, when the rh-nodejs4 collection is uninstalled, this directory is not removed. To work around this problem, remove the directory manually after uninstalling rh-nodejs4 . perl component It is impossible to install more than one mod_perl.so library. As a consequence, it is not possible to use the mod_perl module from more than one Perl Software Collection. nodejs010 component Shared libraries provided by the nodejs010 Software Collection, namely libcares , libhttp_parser , and libuv , are not properly prefixed with the Collection name. As a consequence, conflicts with the corresponding system libraries might occur. nodejs-hawk component The nodejs-hawk package uses an implementation of the SHA-1 and SHA-256 algorithms adopted from the CryptoJS project. In this release, the client-side JavaScript is obfuscated. The future fix will involve using crypto features directly from the CryptoJS library. postgresql component The postgresql92 , rh-postgresql94 , and rh-postgresql95 packages for Red Hat Enterprise Linux 6 do not provide the sepgsql module as this feature requires installation of libselinux version 2.0.99, which is not available in Red Hat Enterprise Linux 6. httpd , mariadb , mongodb , mysql , nodejs , perl , php55 , rh-php56 , python , ruby , ror , thermostat , and v8314 components, BZ# 1072319 When uninstalling the httpd24 , mariadb55 , rh-mariadb100 , mongodb24 , rh-mongodb26 , mysql55 , rh-mysql56 , nodejs010 , perl516 , rh-perl520 , php55 , rh-php56 , python27 , python33 , rh-python34 , ruby193 , ruby200 , rh-ruby22 , ror40 , rh-ror41 , thermostat1 , or v8314 packages, the order of uninstalling can be relevant due to ownership of dependent packages. As a consequence, some directories and files might not be removed properly and might remain on the system. rh-mysql57 , rh-mysql56 , rh-mariadb100 , rh-mariadb101 components, BZ# 1194611 The rh-mysql57-mysql-server , rh-mysql56-mysql-server , rh-mariadb100-mariadb-server , and rh-mariadb101-mariadb-server packages no longer provide the test database by default. Although this database is not created during initialization, the grant tables are prefilled with the same values as when test was created by default. As a consequence, upon a later creation of the test or test_* databases, these databases have less restricted access rights than is default for new databases. Additionally, when running benchmarks, the run-all-tests script no longer works out of the box with example parameters. You need to create a test database before running the tests and specify the database name in the --database parameter. If the parameter is not specified, test is taken by default but you need to make sure the test database exist. 
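For example, a minimal sketch of this benchmark workaround, assuming the rh-mysql57 Software Collection and the sql-bench suite; the benchmark path, user name, and password below are placeholders for your environment:

scl enable rh-mysql57 bash
# create the test database that run-all-tests no longer finds by default (placeholder credentials)
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS test;"
# run the benchmarks, passing the database name explicitly with --database
cd /path/to/sql-bench
perl ./run-all-tests --database=test --user=root --password=<password>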
mongodb24 component The mongodb24 Software Collection from Red Hat Software Collections 1.2 cannot be rebuilt with the rh-java-common and maven30 Software Collections shipped with Red Hat Software Collections 3.0. Additionally, the mongodb24-build and mongodb24-scldevel packages cannot be installed with Red Hat Software Collections 3.0 due to unsatisfied requires on the maven30-javapackages-tools and maven30-maven-local packages . When the mongodb24-scldevel package is installed, broken dependencies are reported and the yum --skip-broken command skips too many packages. Users are advised to update to the rh-mongodb26 Software Collection. mariadb , mysql , postgresql , mongodb components Red Hat Software Collections 3.0 contains the MySQL 5.7 , MySQL 5.6 , MariaDB 10.0 , MariaDB 10.1 , PostgreSQL 9.4 , PostgreSQL 9.5 , MongoDB 2.6 , and MongoDB 3.2 databases. The core Red Hat Enterprise Linux 6 provides earlier versions of the MySQL and PostgreSQL databases (client library and daemon). The core Red Hat Enterprise Linux 7 provides earlier versions of the MariaDB and PostgreSQL databases (client library and daemon). Client libraries are also used in database connectors for dynamic languages, libraries, and so on. The client library packaged in the Red Hat Software Collections database packages in the PostgreSQL component is not supposed to be used, as it is included only for purposes of server utilities and the daemon. Users are instead expected to use the system library and the database connectors provided with the core system. A protocol, which is used between the client library and the daemon, is stable across database versions, so, for example, using the PostgreSQL 9.2 client library with the PostgreSQL 9.4 or 9.5 daemon works as expected. The core Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 do not include the client library for MongoDB . In order to use this client library for your application, you should use the client library from Red Hat Software Collections and always use the scl enable ... call every time you run an application linked against this MongoDB client library. mariadb , mysql , mongodb components MariaDB, MySQL, and MongoDB do not make use of the /opt/ provider / collection /root prefix when creating log files. Note that log files are saved in the /var/opt/ provider / collection /log/ directory, not in /opt/ provider / collection /root/var/log/ . rh-eclipse46 component When a plug-in from a third-party update site is installed, Eclipse sometimes fails to start with a NullPointerException in the workspace log file. To work around this problem, restart Eclipse with the -clean option. For example: rh-eclipse46 component The Eclipse Docker Tooling introduces a Dockerfile editor with syntax highlighting and a basic command auto-completion. When the Build Image Wizard is open and the Edit Dockerfile button is pressed, the Dockerfile editor opens the file in a detached editor window. However, this window does not contain the Cancel and Save buttons. To work around this problem, press Ctrl + S to save your changes or right-click in the editor to launch a context menu, which offers the Save option. To cancel your changes, close the window. rh-eclipse46 component On Red Hat Enterprise Linux 7.2, a bug in the perf tool, which is used to populate the Perf Profile View in Eclipse , causes some of the items in the view not to be properly linked to their respective positions in the Eclipse Editor. 
While the profiling works as expected, it is not possible to navigate to related positions in the Editor by clicking on parts of the Perf Profile View . rh-thermostat16 component Due to typos in the desktop application file, users are unable to launch Thermostat using the desktop icon. To work around this problem, modify the /usr/share/applications/rh-thermostat16-thermostat.desktop file from: To: Alternatively, run Thermostat from the command line: Other Notes rh-ruby22 , rh-ruby23 , rh-python34 , rh-python35 , rh-php56 , rh-php70 components Using Software Collections on a read-only NFS has several limitations. Ruby gems cannot be installed while the rh-ruby22 or rh-ruby23 Software Collection is on a read-only NFS. Consequently, for example, when the user tries to install the ab gem using the gem install ab command, an error message is displayed, for example: The same problem occurs when the user tries to update or install gems from an external source by running the bundle update or bundle install commands. When installing Python packages on a read-only NFS using the Python Package Index (PyPI), running the pip command fails with an error message similar to this: Installing packages from PHP Extension and Application Repository (PEAR) on a read-only NFS using the pear command fails with the error message: This is an expected behavior. httpd component Language modules for Apache are supported only with the Red Hat Software Collections version of Apache httpd and not with the Red Hat Enterprise Linux system versions of httpd . For example, the mod_wsgi module from the rh-python35 Collection can be used only with the httpd24 Collection. all components Since Red Hat Software Collections 2.0, configuration files, variable data, and runtime data of individual Collections are stored in different directories than in previous versions of Red Hat Software Collections. coreutils , util-linux , screen components Some utilities, for example, su , login , or screen , do not export environment settings in all cases, which can lead to unexpected results. It is therefore recommended to use sudo instead of su and set the env_keep environment variable in the /etc/sudoers file. Alternatively, you can run commands in reverse order; for example: instead of When using tools like screen or login , you can use the following command to preserve the environment settings: source /opt/rh/<collection_name>/enable php54 component Note that Alternative PHP Cache (APC) in Red Hat Software Collections is provided only for user data cache. For opcode cache, Zend OPcache is provided. python component When the user tries to install more than one scldevel package from the python27 , python33 , rh-python34 , and rh-python35 Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_python , %scl_ prefix _python ). php component When the user tries to install more than one scldevel package from the php54 , php55 , rh-php56 , and rh-php70 Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_php , %scl_ prefix _php ). ruby component When the user tries to install more than one scldevel package from the ruby193 , ruby200 , rh-ruby22 , and rh-ruby23 Software Collections, a transaction check error message is returned. 
This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_ruby , %scl_ prefix _ruby ). perl component When the user tries to install more than one scldevel package from the perl516 , rh-perl520 , and rh-perl524 Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_perl , %scl_ prefix _perl ). nginx component When the user tries to install more than one scldevel package from the nginx16 and rh-nginx18 Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_nginx , %scl_ prefix _nginx ). nodejs component When installing the nodejs010 Software Collection, nodejs010 installs GCC in the base Red Hat Enterprise Linux system as a dependency, unless the gcc packages are already installed. rh-eclipse46 component The Eclipse SWT graphical library on Red Hat Enterprise Linux 7 uses GTK 3.x. Eclipse Dark Theme is not yet fully stable on GTK 3.x, so this theme is considered a Technology Preview and not supported. For more information about Red Hat Technology Previews, see https://access.redhat.com/support/offerings/techpreview/ . 1.6. Deprecated Functionality httpd24 component, BZ# 1434053 Previously, in an SSL/TLS configuration requiring name-based SSL virtual host selection, the mod_ssl module rejected requests with a 400 Bad Request error, if the host name provided in the Host: header did not match the host name provided in a Server Name Indication (SNI) header. Such requests are no longer rejected if the configured SSL/TLS security parameters are identical between the selected virtual hosts, in-line with the behavior of upstream mod_ssl .
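To illustrate the relaxed mod_ssl behavior described above, here is a hedged sketch of a name-based SSL virtual host configuration in which both hosts share identical SSL/TLS security parameters; the host names and certificate paths are placeholders. With such a configuration, a request whose Host: header names one virtual host while the SNI header names the other is no longer rejected with a 400 Bad Request error:

<VirtualHost *:443>
    ServerName www1.example.com
    SSLEngine on
    # identical certificate, key, and protocol settings in both virtual hosts
    SSLCertificateFile /etc/pki/tls/certs/example.crt
    SSLCertificateKeyFile /etc/pki/tls/private/example.key
</VirtualHost>

<VirtualHost *:443>
    ServerName www2.example.com
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/example.crt
    SSLCertificateKeyFile /etc/pki/tls/private/example.key
</VirtualHost>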
[ "~]USD scl enable rh-eclipse46 \"eclipse -clean\"", "[Desktop Entry] Version=1.0 Type=Application Name=%{thermostat_desktop_app_name} Comment=A monitoring and serviceability tool for OpenJDK Exec=/opt/rh/rh-thermostat16/root/usr/share/thermostat/bin/thermostat local Icon=thermostat", "[Desktop Entry] Version=1.0 Type=Application Name=Thermostat-1.6 Comment=A monitoring and serviceability tool for OpenJDK Exec=scl enable rh-thermostat16 \"thermostat local\" Icon=rh-thermostat16-thermostat", "scl enable rh-thermostat16 \"thermostat local\"", "ERROR: While executing gem ... (Errno::EROFS) Read-only file system @ dir_s_mkdir - /opt/rh/rh-ruby22/root/usr/local/share/gems", "Read-only file system: '/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/ipython-3.1.0.dist-info'", "Cannot install, php_dir for channel \"pear.php.net\" is not writeable by the current user", "su -l postgres -c \"scl enable rh-postgresql94 psql\"", "scl enable rh-postgresql94 bash su -l postgres -c psql" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.0_release_notes/chap-RHSCL
Chapter 11. Enabling encryption on a vSphere cluster
Chapter 11. Enabling encryption on a vSphere cluster You can encrypt your virtual machines after installing OpenShift Container Platform 4.15 on vSphere by draining and shutting down your nodes one at a time. While each virtual machine is shut down, you can enable encryption in the vCenter web interface. 11.1. Encrypting virtual machines You can encrypt your virtual machines with the following process. You can drain your virtual machines, power them down, and encrypt them using the vCenter interface. Finally, you can create a storage class to use the encrypted storage. Prerequisites You have configured a Standard key provider in vSphere. For more information, see Adding a KMS to vCenter Server . Important The Native key provider in vCenter is not supported. For more information, see vSphere Native Key Provider Overview . You have enabled host encryption mode on all of the ESXi hosts that are hosting the cluster. For more information, see Enabling host encryption mode . You have a vSphere account that has all cryptographic privileges enabled. For more information, see Cryptographic Operations Privileges . Procedure Drain and cordon one of your nodes. For detailed instructions on node management, see "Working with Nodes". Shut down the virtual machine associated with that node in the vCenter interface. Right-click on the virtual machine in the vCenter interface and select VM Policies Edit VM Storage Policies . Select an encrypted storage policy and select OK . Start the encrypted virtual machine in the vCenter interface. Repeat steps 1-5 for all nodes that you want to encrypt. Configure a storage class that uses the encrypted storage policy, as shown in the sketch that follows this section. For more information about configuring an encrypted storage class, see "VMware vSphere CSI Driver Operator". 11.2. Additional resources Working with nodes vSphere encryption Requirements for encrypting virtual machines
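For reference, a minimal sketch of a storage class that consumes the encrypted storage policy, created with the oc client; the storage class name, the storagepolicyname parameter key, and the policy name are assumptions based on the vSphere CSI driver and should be verified against the "VMware vSphere CSI Driver Operator" documentation referenced above:

cat <<'EOF' | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-storage               # placeholder storage class name
provisioner: csi.vsphere.vmware.com     # vSphere CSI driver
parameters:
  # name of the encrypted VM storage policy selected in vCenter (placeholder)
  storagepolicyname: "openshift-encrypted-policy"
EOF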
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_vsphere/vsphere-post-installation-encryption
Installing and viewing plugins in Red Hat Developer Hub
Installing and viewing plugins in Red Hat Developer Hub Red Hat Developer Hub 1.4 Red Hat Customer Content Services
[ "kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic' disabled: false pluginConfig: catalog: providers: github: organization: \"USD{GITHUB_ORG}\" schedule: frequency: { minutes: 1 } timeout: { minutes: 1 } initialDelay: { seconds: 100 }", "apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: name: my-rhdh spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh", "[ { \"name\": \"backstage-plugin-catalog-backend-module-github-dynamic\", \"version\": \"0.5.2\", \"platform\": \"node\", \"role\": \"backend-plugin-module\" }, { \"name\": \"backstage-plugin-techdocs\", \"version\": \"1.10.0\", \"role\": \"frontend-plugin\", \"platform\": \"web\" }, { \"name\": \"backstage-plugin-techdocs-backend-dynamic\", \"version\": \"1.9.5\", \"platform\": \"node\", \"role\": \"backend-plugin\" }, ]", "global: dynamic: plugins: - package: <alocal package-spec used by npm pack> - package: <external package-spec used by npm pack> integrity: sha512-<some hash> pluginConfig:", "global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.default.yaml> disabled: true", "global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false", "global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false", "apiVersion: v1 kind: Secret metadata: name: <release_name> -dynamic-plugins-npmrc 1 type: Opaque stringData: .npmrc: | registry=<registry-url> //<registry-url>:_authToken=<auth-token>", "npx @janus-idp/cli@latest export-dynamic-plugin --shared-package '!/@backstage/plugin-notifications/' --embed-package @backstage/plugin-notifications-backend", "npx @janus-idp/cli@latest export-dynamic", "\"scalprum\": { \"name\": \"<package_name>\", // The Webpack container name matches the NPM package name, with \"@\" replaced by \".\" and \"/\" removed. \"exposedModules\": { \"PluginRoot\": \"./src/index.ts\" // The default module name is \"PluginRoot\" and doesn't need explicit specification in the app-config.yaml file. } }", "\"scalprum\": { \"name\": \"custom-package-name\", \"exposedModules\": { \"FooModuleName\": \"./src/foo.ts\", \"BarModuleName\": \"./src/bar.ts\" // Define multiple modules here, with each exposed as a separate entry point in the Webpack container. 
} }", "// For a static plugin export const EntityTechdocsContent = () => {...} // For a dynamic plugin export const DynamicEntityTechdocsContent = { element: EntityTechdocsContent, staticJSXContent: ( <TechDocsAddons> <ReportIssue /> </TechDocsAddons> ), };", "npx @janus-idp/cli@latest package export-dynamic-plugin", "npx @janus-idp/cli@latest package package-dynamic-plugins --tag quay.io/example/image:v0.0.1", "push quay.io/example/image:v0.0.1", "docker push quay.io/example/image:v0.0.1", "npm pack", "npm pack --json | head -n 10", "plugins: - package: https://example.com/backstage-plugin-myplugin-1.0.0.tgz integrity: sha512-<hash>", "npm pack --pack-destination ~/test/dynamic-plugins-root/", "project my-rhdh-project new-build httpd --name=plugin-registry --binary start-build plugin-registry --from-dir=dynamic-plugins-root --wait new-app --image-stream=plugin-registry", "plugins: - package: http://plugin-registry:8080/backstage-plugin-myplugin-1.9.6.tgz", "npm publish --registry <npm_registry_url>", "{ \"publishConfig\": { \"registry\": \"<npm_registry_url>\" } }", "plugins: - disabled: false package: oci://quay.io/example/image:v0.0.1!backstage-plugin-myplugin", "plugins: - disabled: false package: oci://quay.io/example/image@sha256:28036abec4dffc714394e4ee433f16a59493db8017795049c831be41c02eb5dc!backstage-plugin-myplugin", "plugins: - disabled: false package: https://example.com/backstage-plugin-myplugin-1.0.0.tgz integrity: sha512-9WlbgEdadJNeQxdn1973r5E4kNFvnT9GjLD627GWgrhCaxjCmxqdNW08cj+Bf47mwAtZMt1Ttyo+ZhDRDj9PoA==", "npm view --registry <registry-url> <npm package>@<version> dist.integrity", "plugins: - disabled: false package: @example/[email protected] integrity: sha512-9WlbgEdadJNeQxdn1973r5E4kNFvnT9GjLD627GWgrhCaxjCmxqdNW08cj+Bf47mwAtZMt1Ttyo+ZhDRDj9PoA==", "registry=<registry-url> //<registry-url>:_authToken=<auth-token>", "apiVersion: v1 kind: Secret metadata: name: <release_name> -dynamic-plugins-npmrc 1 type: Opaque stringData: .npmrc: | registry=<registry-url> //<registry-url>:_authToken=<auth-token>", "git clone https://github.com/backstage/community-plugins cd community-plugins/workspaces/todo yarn install", "cd todo-backend npx @janus-idp/cli@latest package export-dynamic-plugin", "Building main package executing yarn build βœ” Packing main package to dist-dynamic/package.json Customizing main package in dist-dynamic/package.json for dynamic loading moving @backstage/backend-common to peerDependencies moving @backstage/backend-openapi-utils to peerDependencies moving @backstage/backend-plugin-api to peerDependencies moving @backstage/catalog-client to peerDependencies moving @backstage/catalog-model to peerDependencies moving @backstage/config to peerDependencies moving @backstage/errors to peerDependencies moving @backstage/integration to peerDependencies moving @backstage/plugin-catalog-node to peerDependencies Installing private dependencies of the main package executing yarn install --no-immutable βœ” Validating private dependencies Validating plugin entry points Saving self-contained config schema in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo-backend/dist-dynamic/dist/configSchema.json", "cd ../todo npx @janus-idp/cli@latest package export-dynamic-plugin", "No scalprum config. 
Using default dynamic UI configuration: { \"name\": \"backstage-community.plugin-todo\", \"exposedModules\": { \"PluginRoot\": \"./src/index.ts\" } } If you wish to change the defaults, add \"scalprum\" configuration to plugin \"package.json\" file, or use the '--scalprum-config' option to specify an external config. Packing main package to dist-dynamic/package.json Customizing main package in dist-dynamic/package.json for dynamic loading Generating dynamic frontend plugin assets in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo/dist-dynamic/dist-scalprum 263.46 kB dist-scalprum/static/1417.d5271413.chunk.js 250 B dist-scalprum/static/react-syntax-highlighter_languages_highlight_plaintext.0b7d6592.chunk.js Saving self-contained config schema in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo/dist-dynamic/dist-scalprum/configSchema.json", "cd ../.. npx @janus-idp/cli@latest package package-dynamic-plugins --tag quay.io/user/backstage-community-plugin-todo:v0.1.1", "executing podman --version βœ” Using existing 'dist-dynamic' directory at plugins/todo Using existing 'dist-dynamic' directory at plugins/todo-backend Copying 'plugins/todo/dist-dynamic' to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/backstage-community-plugin-todo No plugin configuration found at undefined create this file as needed if this plugin requires configuration Copying 'plugins/todo-backend/dist-dynamic' to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/backstage-community-plugin-todo-backend-dynamic No plugin configuration found at undefined create this file as needed if this plugin requires configuration Writing plugin registry metadata to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/index.json' Creating image using podman executing echo \"from scratch COPY . . \" | podman build --annotation com.redhat.rhdh.plugins='[{\"backstage-community-plugin-todo\":{\"name\":\"@backstage-community/plugin-todo\",\"version\":\"0.2.40\",\"description\":\"A Backstage plugin that lets you browse TODO comments in your source code\",\"backstage\":{\"role\":\"frontend-plugin\",\"pluginId\":\"todo\",\"pluginPackages\":[\"@backstage-community/plugin-todo\",\"@backstage-community/plugin-todo-backend\"]},\"homepage\":\"https://backstage.io\",\"repository\":{\"type\":\"git\",\"url\":\"https://github.com/backstage/community-plugins\",\"directory\":\"workspaces/todo/plugins/todo\"},\"license\":\"Apache-2.0\"}},{\"backstage-community-plugin-todo-backend-dynamic\":{\"name\":\"@backstage-community/plugin-todo-backend\",\"version\":\"0.3.19\",\"description\":\"A Backstage backend plugin that lets you browse TODO comments in your source code\",\"backstage\":{\"role\":\"backend-plugin\",\"pluginId\":\"todo\",\"pluginPackages\":[\"@backstage-community/plugin-todo\",\"@backstage-community/plugin-todo-backend\"]},\"homepage\":\"https://backstage.io\",\"repository\":{\"type\":\"git\",\"url\":\"https://github.com/backstage/community-plugins\",\"directory\":\"workspaces/todo/plugins/todo-backend\"},\"license\":\"Apache-2.0\"}}]' -t 'quay.io/user/backstage-community-plugin-todo:v0.1.1' -f - . 
βœ” Successfully built image quay.io/user/backstage-community-plugin-todo:v0.1.1 with following plugins: backstage-community-plugin-todo backstage-community-plugin-todo-backend-dynamic Here is an example dynamic-plugins.yaml for these plugins: plugins: - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo disabled: false - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo-backend-dynamic disabled: false", "podman push quay.io/user/backstage-community-plugin-todo:v0.1.1", "Getting image source signatures Copying blob sha256:86a372c456ae6a7a305cd464d194aaf03660932efd53691998ab3403f87cacb5 Copying config sha256:3b7f074856ecfbba95a77fa87cfad341e8a30c7069447de8144aea0edfcb603e Writing manifest to image destination", "packages: - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo pluginConfig: dynamicPlugins: frontend: backstage-community.plugin-todo: mountPoints: - mountPoint: entity.page.todo/cards importName: EntityTodoContent entityTabs: - path: /todo title: Todo mountPoint: entity.page.todo - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo-backend-dynamic disabled: false", "plugins: - disabled: false package: ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html-single/installing_and_viewing_plugins_in_red_hat_developer_hub/index
Chapter 5. Managing images
Chapter 5. Managing images 5.1. Managing images overview With OpenShift Container Platform you can interact with images and set up image streams, depending on where the registries of the images are located, any authentication requirements around those registries, and how you want your builds and deployments to behave. 5.1.1. Images overview An image stream comprises any number of container images identified by tags. It presents a single virtual view of related images, similar to a container image repository. By watching an image stream, builds and deployments can receive notifications when new images are added or modified and react by performing a build or deployment, respectively. 5.2. Tagging images The following sections provide an overview and instructions for using image tags in the context of container images for working with OpenShift Container Platform image streams and their tags. 5.2.1. Image tags An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest . OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. 5.2.2. Image tag conventions Images evolve over time and their tags reflect this. Generally, an image tag always points to the latest image built. If there is too much information embedded in a tag name, like v2.0.1-may-2019 , the tag points to just one revision of an image and is never updated. Using default image pruning options, such an image is never removed. In very large clusters, the schema of creating new tags for every revised image could eventually fill up the etcd datastore with excess tag metadata for images that are long outdated. If the tag is named v2.0 , image revisions are more likely. This results in longer tag history and, therefore, the image pruner is more likely to remove old and unused images. Although tag naming convention is up to you, here are a few examples in the format <image_name>:<image_tag> : Table 5.1. Image tag naming conventions Description Example Revision myimage:v2.0.1 Architecture myimage:v2.0-x86_64 Base image myimage:v1.2-centos7 Latest (potentially unstable) myimage:latest Latest stable myimage:stable If you require dates in tag names, periodically inspect old and unsupported images and istags and remove them. Otherwise, you can experience increasing resource usage caused by retaining old images. 5.2.3. Adding tags to image streams An image stream in OpenShift Container Platform comprises zero or more container images identified by tags. There are different types of tags available. The default behavior uses a permanent tag, which points to a specific image in time. If the permanent tag is in use and the source changes, the tag does not change for the destination. A tracking tag means the destination tag's metadata is updated during the import of the source tag. 
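For example, to confirm whether a permanent tag has stayed pinned after the source tag moved on, you can inspect the image stream tag directly and compare the image references. This is a quick sketch only; the project, image stream, and tag names are placeholders, and it assumes the image stream tags already exist:

# Show the image reference that the destination tag currently resolves to
oc get istag <image_stream>:<destination_tag> -n <project> -o jsonpath='{.image.dockerImageReference}{"\n"}'

# Compare it with the image currently referenced by the source tag
oc get istag <image_stream>:<source_tag> -n <project> -o jsonpath='{.image.dockerImageReference}{"\n"}'

If the two references differ, the destination is behaving as a permanent tag; a tracking tag would follow the source on the next import.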
Procedure You can add tags to an image stream using the oc tag command: USD oc tag <source> <destination> For example, to configure the ruby image stream static-2.0 tag to always refer to the current image for the ruby image stream 2.0 tag: USD oc tag ruby:2.0 ruby:static-2.0 This creates a new image stream tag named static-2.0 in the ruby image stream. The new tag directly references the image id that the ruby:2.0 image stream tag pointed to at the time oc tag was run, and the image it points to never changes. To ensure the destination tag is updated when the source tag changes, use the --alias=true flag: USD oc tag --alias=true <source> <destination> Note Use a tracking tag for creating permanent aliases, for example, latest or stable . The tag only works correctly within a single image stream. Trying to create a cross-image stream alias produces an error. You can also add the --scheduled=true flag to have the destination tag be refreshed, or re-imported, periodically. The period is configured globally at the system level. The --reference flag creates an image stream tag that is not imported. The tag points to the source location, permanently. If you want to instruct OpenShift Container Platform to always fetch the tagged image from the integrated registry, use --reference-policy=local . The registry uses the pull-through feature to serve the image to the client. By default, the image blobs are mirrored locally by the registry. As a result, they can be pulled more quickly the time they are needed. The flag also allows for pulling from insecure registries without a need to supply --insecure-registry to the container runtime as long as the image stream has an insecure annotation or the tag has an insecure import policy. 5.2.4. Removing tags from image streams You can remove tags from an image stream. Procedure To remove a tag completely from an image stream run: USD oc delete istag/ruby:latest or: USD oc tag -d ruby:latest 5.2.5. Referencing images in imagestreams You can use tags to reference images in image streams using the following reference types. Table 5.2. Imagestream reference types Reference type Description ImageStreamTag An ImageStreamTag is used to reference or retrieve an image for a given image stream and tag. ImageStreamImage An ImageStreamImage is used to reference or retrieve an image for a given image stream and image sha ID. DockerImage A DockerImage is used to reference or retrieve an image for a given external registry. It uses standard Docker pull specification for its name. When viewing example image stream definitions you may notice they contain definitions of ImageStreamTag and references to DockerImage , but nothing related to ImageStreamImage . This is because the ImageStreamImage objects are automatically created in OpenShift Container Platform when you import or tag an image into the image stream. You should never have to explicitly define an ImageStreamImage object in any image stream definition that you use to create image streams. Procedure To reference an image for a given image stream and tag, use ImageStreamTag : To reference an image for a given image stream and image sha ID, use ImageStreamImage : The <id> is an immutable identifier for a specific image, also called a digest. To reference or retrieve an image for a given external registry, use DockerImage : Note When no tag is specified, it is assumed the latest tag is used. You can also reference a third-party registry: Or an image with a digest: 5.3. 
Image pull policy Each container in a pod has a container image. After you have created an image and pushed it to a registry, you can then refer to it in the pod. 5.3.1. Image pull policy overview When OpenShift Container Platform creates containers, it uses the container imagePullPolicy to determine if the image should be pulled prior to starting the container. There are three possible values for imagePullPolicy : Table 5.3. imagePullPolicy values Value Description Always Always pull the image. IfNotPresent Only pull the image if it does not already exist on the node. Never Never pull the image. If a container imagePullPolicy parameter is not specified, OpenShift Container Platform sets it based on the image tag: If the tag is latest , OpenShift Container Platform defaults imagePullPolicy to Always . Otherwise, OpenShift Container Platform defaults imagePullPolicy to IfNotPresent . 5.4. Using image pull secrets If you are using the OpenShift image registry and are pulling from image streams located in the same project, then your pod service account should already have the correct permissions and no additional action should be required. However, for other scenarios, such as referencing images across OpenShift Container Platform projects or from secured registries, additional configuration steps are required. You can obtain the image pull secret from Red Hat OpenShift Cluster Manager . This pull secret is called pullSecret . You use this pull secret to authenticate with the services that are provided by the included authorities, Quay.io and registry.redhat.io , which serve the container images for OpenShift Container Platform components. 5.4.1. Allowing pods to reference images across projects When using the OpenShift image registry, to allow pods in project-a to reference images in project-b , a service account in project-a must be bound to the system:image-puller role in project-b . Note When you create a pod service account or a namespace, wait until the service account is provisioned with a docker pull secret; if you create a pod before its service account is fully provisioned, the pod fails to access the OpenShift image registry. Procedure To allow pods in project-a to reference images in project-b , bind a service account in project-a to the system:image-puller role in project-b : USD oc policy add-role-to-user \ system:image-puller system:serviceaccount:project-a:default \ --namespace=project-b After adding that role, the pods in project-a that reference the default service account are able to pull images from project-b . To allow access for any service account in project-a , use the group: USD oc policy add-role-to-group \ system:image-puller system:serviceaccounts:project-a \ --namespace=project-b 5.4.2. Allowing pods to reference images from other secured registries To pull a secured container from other private or secured registries, you must create a pull secret from your container client credentials, such as Docker or Podman, and add it to your service account. Both Docker and Podman use a configuration file to store authentication details to log in to secured or insecure registry: Docker : By default, Docker uses USDHOME/.docker/config.json . Podman : By default, Podman uses USDHOME/.config/containers/auth.json . These files store your authentication information if you have previously logged in to a secured or insecure registry. 
Note Both Docker and Podman credential files and the associated pull secret can contain multiple references to the same registry if they have unique paths, for example, quay.io and quay.io/<example_repository> . However, neither Docker nor Podman support multiple entries for the exact same registry path. Example config.json file { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io/repository-main":{ "auth":"b3Blb=", "email":"[email protected]" } } } Example pull secret apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: "2021-09-09T19:10:11Z" name: pull-secret namespace: default resourceVersion: "37676" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque 5.4.2.1. Creating a pull secret Procedure Create a secret from an existing authentication file: For Docker clients using .docker/config.json , enter the following command: USD oc create secret generic <pull_secret_name> \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson For Podman clients using .config/containers/auth.json , enter the following command: USD oc create secret generic <pull_secret_name> \ --from-file=<path/to/.config/containers/auth.json> \ --type=kubernetes.io/podmanconfigjson If you do not already have a Docker credentials file for the secured registry, you can create a secret by running the following command: USD oc create secret docker-registry <pull_secret_name> \ --docker-server=<registry_server> \ --docker-username=<user_name> \ --docker-password=<password> \ --docker-email=<email> 5.4.2.2. Using a pull secret in a workload You can use a pull secret to allow workloads to pull images from a private registry with one of the following methods: By linking the secret to a ServiceAccount , which automatically applies the secret to all pods using that service account. By defining imagePullSecrets directly in workload configurations, which is useful for environments like GitOps or ArgoCD. Procedure You can use a secret for pulling images for pods by adding the secret to your service account. Note that the name of the service account should match the name of the service account that pod uses. The default service account is default . Enter the following command to link the pull secret to a ServiceAccount : USD oc secrets link default <pull_secret_name> --for=pull To verify the change, enter the following command: USD oc get serviceaccount default -o yaml Example output apiVersion: v1 imagePullSecrets: - name: default-dockercfg-123456 - name: <pull_secret_name> kind: ServiceAccount metadata: annotations: openshift.io/internal-registry-pull-secret-ref: <internal_registry_pull_secret> creationTimestamp: "2025-03-03T20:07:52Z" name: default namespace: default resourceVersion: "13914" uid: 9f62dd88-110d-4879-9e27-1ffe269poe3 secrets: - name: <pull_secret_name> Instead of linking the secret to a service account, you can alternatively reference it directly in your pod or workload definition. This is useful for GitOps workflows such as ArgoCD. 
For example: Example pod specification apiVersion: v1 kind: Pod metadata: name: <secure_pod_name> spec: containers: - name: <container_name> image: quay.io/my-private-image imagePullSecrets: - name: <pull_secret_name> Example ArgoCD workflow apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: <example_workflow> spec: entrypoint: <main_task> imagePullSecrets: - name: <pull_secret_name> 5.4.2.3. Pulling from private registries with delegated authentication A private registry can delegate authentication to a separate service. In these cases, image pull secrets must be defined for both the authentication and registry endpoints. Procedure Create a secret for the delegated authentication server: USD oc create secret docker-registry \ --docker-server=sso.redhat.com \ [email protected] \ --docker-password=******** \ --docker-email=unused \ redhat-connect-sso secret/redhat-connect-sso Create a secret for the private registry: USD oc create secret docker-registry \ --docker-server=privateregistry.example.com \ [email protected] \ --docker-password=******** \ --docker-email=unused \ private-registry secret/private-registry 5.4.3. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. Important To transfer your cluster to another owner, you must first initiate the transfer in OpenShift Cluster Manager , and then update the pull secret on the cluster. Updating a cluster's pull secret without initiating the transfer in OpenShift Cluster Manager causes the cluster to stop reporting Telemetry metrics in OpenShift Cluster Manager. For more information about transferring cluster ownership , see "Transferring cluster ownership" in the Red Hat OpenShift Cluster Manager documentation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot.
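After updating the global pull secret, you may want to confirm that the new registry entry is present in the cluster-wide secret. The following is a minimal sketch that reuses the extraction command shown above; registry.example.com is a placeholder for your registry hostname:

# Extract the merged pull secret and check for the new registry entry
oc get secret/pull-secret -n openshift-config \
  --template='{{index .data ".dockerconfigjson" | base64decode}}' \
  | grep '"registry.example.com"'

If the hostname does not appear in the output, re-run the oc registry login or manual update step before setting the secret data again.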
[ "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "oc tag <source> <destination>", "oc tag ruby:2.0 ruby:static-2.0", "oc tag --alias=true <source> <destination>", "oc delete istag/ruby:latest", "oc tag -d ruby:latest", "<image_stream_name>:<tag>", "<image_stream_name>@<id>", "openshift/ruby-20-centos7:2.0", "registry.redhat.io/rhel7:latest", "centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e", "oc policy add-role-to-user system:image-puller system:serviceaccount:project-a:default --namespace=project-b", "oc policy add-role-to-group system:image-puller system:serviceaccounts:project-a --namespace=project-b", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque", "oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "oc create secret generic <pull_secret_name> --from-file=<path/to/.config/containers/auth.json> --type=kubernetes.io/podmanconfigjson", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "oc secrets link default <pull_secret_name> --for=pull", "oc get serviceaccount default -o yaml", "apiVersion: v1 imagePullSecrets: - name: default-dockercfg-123456 - name: <pull_secret_name> kind: ServiceAccount metadata: annotations: openshift.io/internal-registry-pull-secret-ref: <internal_registry_pull_secret> creationTimestamp: \"2025-03-03T20:07:52Z\" name: default namespace: default resourceVersion: \"13914\" uid: 9f62dd88-110d-4879-9e27-1ffe269poe3 secrets: - name: <pull_secret_name>", "apiVersion: v1 kind: Pod metadata: name: <secure_pod_name> spec: containers: - name: <container_name> image: quay.io/my-private-image imagePullSecrets: - name: <pull_secret_name>", "apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: <example_workflow> spec: entrypoint: <main_task> imagePullSecrets: - name: <pull_secret_name>", "oc create secret docker-registry --docker-server=sso.redhat.com [email protected] --docker-password=******** --docker-email=unused redhat-connect-sso secret/redhat-connect-sso", "oc create secret docker-registry --docker-server=privateregistry.example.com [email protected] --docker-password=******** --docker-email=unused private-registry secret/private-registry", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/images/managing-images
Chapter 6. Virtual Private Networks
Chapter 6. Virtual Private Networks Organizations with several satellite offices often connect to each other with dedicated lines for efficiency and protection of sensitive data in transit. For example, many businesses use frame relay or Asynchronous Transfer Mode (ATM) lines as an end-to-end networking solution to link one office with others. This can be an expensive proposition, especially for small to medium sized businesses (SMBs) that want to expand without paying the high costs associated with enterprise-level, dedicated digital circuits. To address this need, Virtual Private Networks ( VPN s) were developed. Following the same functional principles as dedicated circuits, VPNs allow for secured digital communication between two parties (or networks), creating a Wide Area Network (WAN) from existing Local Area Networks ( LAN s). Where it differs from frame relay or ATM is in its transport medium. VPNs transmit over IP using datagrams as the transport layer, making it a secure conduit through the Internet to an intended destination. Most free software VPN implementations incorporate open standard encryption methods to further mask data in transit. Some organizations employ hardware VPN solutions to augment security, while others use the software or protocol-based implementations. There are several vendors with hardware VPN solutions such as Cisco, Nortel, IBM, and Checkpoint. There is a free software-based VPN solution for Linux called FreeS/Wan that utilizes a standardized IPsec (or Internet Protocol Security) implementation. These VPN solutions, regardless if hardware or software based, act as specialized routers that sit between the IP connection from one office to another. When a packet is transmitted from a client, it sends it through the router or gateway, which then adds header information for routing and authentication called the Authentication Header ( AH ). The data is encrypted and is enclosed with decryption and handling instruction called the Encapsulating Security Payload ( ESP ). The receiving VPN router strips the header information, decrypts the data, and routes it to its intended destination (either a workstation or node on a network). Using a network-to-network connection, the receiving node on the local network receives the packets decrypted and ready for processing. The encryption/decryption process in a network-to-network VPN connection is transparent to a local node. With such a heightened level of security, a cracker must not only intercept a packet, but decrypt the packet as well. Intruders who employ a man-in-the-middle attack between a server and client must also have access to at least one of the private keys for authenticating sessions. Because they employ several layers of authentication and encryption, VPNs are a secure and effective means to connect multiple remote nodes to act as a unified Intranet. 6.1. VPNs and Red Hat Enterprise Linux Red Hat Enterprise Linux users have various options in terms of implementing a software solution to securely connect to their WAN. Internet Protocol Security , or IPsec is the supported VPN implementation for Red Hat Enterprise Linux that sufficiently addresses the usability needs of organizations with branch offices or remote users.
null

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ch-vpn
Chapter 2. Managing compute machines with the Machine API
Chapter 2. Managing compute machines with the Machine API 2.1. Creating a compute machine set on Alibaba Cloud You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Alibaba Cloud. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.1.1. Sample YAML for a compute machine set custom resource on Alibaba Cloud This sample YAML defines a compute machine set that runs in a specified Alibaba Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: "" zoneId: <zone> 21 1 5 7 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 Specify the node label to add. 4 6 10 Specify the infrastructure ID, node label, and zone. 11 Specify the image to use. 
Use an image from an existing default compute machine set for the cluster. 12 Specify the instance type you want to use for the compute machine set. 13 Specify the name of the RAM role to use for the compute machine set. Use the value that the installer populates in the default compute machine set. 14 Specify the region to place machines on. 15 Specify the resource group and type for the cluster. You can use the value that the installer populates in the default compute machine set, or specify a different one. 16 18 20 Specify the tags to use for the compute machine set. Minimally, you must include the tags shown in this example, with appropriate values for your cluster. You can include additional tags, including the tags that the installer populates in the default compute machine set it creates, as needed. 17 Specify the type and size of the root disk. Use the category value that the installer populates in the default compute machine set it creates. If required, specify a different value in gigabytes for size . 19 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default compute machine set. 21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 2.1.1.1. Machine set parameters for Alibaba Cloud usage statistics The default compute machine sets that the installer creates for Alibaba Cloud clusters include nonessential tag values that Alibaba Cloud uses internally to track usage statistics. These tags are populated in the securityGroups , tag , and vSwitch parameters of the spec.template.spec.providerSpec.value list. When creating compute machine sets to deploy additional machines, you must include the required Kubernetes tags. The usage statistics tags are applied by default, even if they are not specified in the compute machine sets you create. You can also include additional tags as needed. The following YAML snippets indicate which tags in the default compute machine sets are optional and which are required. Tags in spec.template.spec.providerSpec.value.securityGroups spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags 1 2 Optional: This tag is applied even when not specified in the compute machine set. 3 Required. where: <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. <role> is the node label to add. Tags in spec.template.spec.providerSpec.value.tag spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp 2 3 Optional: This tag is applied even when not specified in the compute machine set. 1 Required. where <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
Tags in spec.template.spec.providerSpec.value.vSwitch spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags 1 2 3 Optional: This tag is applied even when not specified in the compute machine set. 4 Required. where: <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. <zone> is the zone within your region to place machines on. 2.1.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.2. Creating a compute machine set on AWS You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.2.1. Sample YAML for a compute machine set custom resource on AWS The sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) Local Zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned - name: <custom_tag_name> 17 value: <custom_tag_value> 18 userDataSecret: name: worker-user-data 1 3 5 11 14 16 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID, role node label, and zone. 6 7 9 Specify the role node label to add. 10 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-<role>-<zone> 17 18 Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:[email protected] . Note Custom tags can also be specified during installation in the install-config.yml file. If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file. 12 Specify the zone, for example, us-east-1a . 13 Specify the region, for example, us-east-1 . 15 Specify the infrastructure ID and zone. 2.2.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . 
Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml If you need compute machine sets in other availability zones, repeat this process to create more compute machine sets. Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.2.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. 
For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.2.4. Assigning machines to placement groups for Elastic Fabric Adapter instances by using machine sets You can configure a machine set to deploy machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group. EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. This example uses both to demonstrate a configuration that can improve network performance for machines within the specified placement group. Prerequisites You created a placement group in the AWS console. Note Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case. Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 # ... 1 Specify an instance type that supports EFAs . 2 Specify the EFA network interface type. 3 Specify the zone, for example, us-east-1a . 4 Specify the region, for example, us-east-1 . 5 Specify the name of the existing AWS placement group to deploy machines in. Verification In the AWS console, find a machine that the machine set created and verify the following in the machine properties: The placement group field has the value that you specified for the placementGroupName parameter in the machine set. The interface type field indicates that it uses an EFA. 2.2.5. Machine set options for the Amazon EC2 Instance Metadata Service You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2. Note Using IMDSv2 is only supported on AWS clusters that were created with OpenShift Container Platform version 4.7 or later. To deploy new compute machines with your preferred IMDS configuration, create a compute machine set YAML file with the appropriate values. You can also edit an existing machine set to create new machines with your preferred IMDS configuration when the machine set is scaled up. Important Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2. 2.2.5.1. Configuring IMDS by using machine sets You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines. Prerequisites To use IMDSv2, your AWS cluster must have been created with OpenShift Container Platform version 4.7 or later. Procedure Add or edit the following lines under the providerSpec field: providerSpec: value: metadataServiceOptions: authentication: Required 1 1 To require IMDSv2, set the parameter value to Required . 
To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 2.2.6. Machine sets that deploy machines as Dedicated Instances You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account. Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware. 2.2.6.1. Creating Dedicated Instances by using machine sets You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS. Procedure Specify a dedicated tenancy under the providerSpec field: providerSpec: placement: tenancy: dedicated 2.2.7. Machine sets that deploy machines as Spot Instances You can save on costs by creating a compute machine set running on AWS that deploys machines as non-guaranteed Spot Instances. Spot Instances utilize unused AWS EC2 capacity and are less expensive than On-Demand Instances. You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. AWS EC2 can terminate a Spot Instance at any time. AWS gives a two-minute warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when AWS issues the termination warning. Interruptions can occur when using Spot Instances for the following reasons: The instance price exceeds your maximum price The demand for Spot Instances increases The supply of Spot Instances decreases When AWS terminates an instance, a termination handler running on the Spot Instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot Instance. 2.2.7.1. Creating Spot Instances by using compute machine sets You can launch a Spot Instance on AWS by adding spotMarketOptions to your compute machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: spotMarketOptions: {} You can optionally set the spotMarketOptions.maxPrice field to limit the cost of the Spot Instance. For example you can set maxPrice: '2.50' . If the maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to charge up to the On-Demand Instance price. Note It is strongly recommended to use the default On-Demand price as the maxPrice value and to not set the maximum price for Spot Instances. 2.2.8. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the AWS EC2 cloud provider. 
For more information about the supported instance types, see the following NVIDIA documentation: NVIDIA GPU Operator Community support matrix NVIDIA AI Enterprise support matrix Procedure View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific AWS region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.28.5 ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.28.5 ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.28.5 ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.28.5 ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.28.5 ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.28.5 View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the AWS region. The installer automatically load balances compute machines across availability zones. USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h View the machines that exist in the openshift-machine-api namespace by running the following command. At this time, there is only one compute machine per machine set, though a compute machine set could be scaled to add a node in a particular region and zone. USD oc get machines -n openshift-machine-api | grep worker Example output preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition. USD oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json> Edit the JSON file and make the following changes to the new MachineSet definition: Replace worker with gpu . This will be the name of the new machine set. Change the instance type of the new MachineSet definition to g4dn , which includes an NVIDIA Tesla T4 GPU. To learn more about AWS g4dn instance types, see Accelerated Computing . USD jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json "g4dn.xlarge" The <output_file.json> file is saved as preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json . Update the following fields in preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json : .metadata.name to a name containing gpu . .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . .spec.template.spec.providerSpec.value.instanceType to g4dn.xlarge . 
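One way to apply the field updates listed above in a single pass is with jq. This is a sketch only; the machine set and file names follow the example in this procedure and will differ in your cluster:

# Update the machine set name, matching labels, and instance type in place
jq '.metadata.name = "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
    | .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] = .metadata.name
    | .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] = .metadata.name
    | .spec.template.spec.providerSpec.value.instanceType = "g4dn.xlarge"' \
  preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json > tmp.json && mv tmp.json preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json

Because jq cannot edit a file in place, the sketch writes to a temporary file and moves it back over the original.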
To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD oc -n openshift-machine-api get preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json - Example output 10c10 < "name": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a", --- > "name": "preserve-dsoc12r4-ktjfc-worker-us-east-2a", 21c21 < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a" --- > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a" 31c31 < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a" --- > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a" 60c60 < "instanceType": "g4dn.xlarge", --- > "instanceType": "m5.xlarge", Create the GPU-enabled compute machine set from the definition by running the following command: USD oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json Example output machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created Verification View the machine set you created by running the following command: USD oc -n openshift-machine-api get machinesets | grep gpu The MachineSet replica count is set to 1 so a new Machine object is created automatically. Example output preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s Note that there is no need to specify a namespace for the node. The node definition is cluster scoped. 2.2.9. Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator into OperatorHub , select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Oerator in the console and select Create Node Feature Discovery . Select Create to build a NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalogue them. 
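If you prefer to check the result from the command line rather than the console, you can list the NodeFeatureDiscovery instance that the Operator created. This is a sketch; nfd-instance is only the name typically proposed by the console, so substitute the name you chose:

# List NodeFeatureDiscovery instances managed by the NFD Operator
oc get nodefeaturediscovery -n openshift-nfd

# Inspect the generated custom resource
oc get nodefeaturediscovery nfd-instance -n openshift-nfd -o yaml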
Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet. 2.3. Creating a compute machine set on Azure You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.3.1. Sample YAML for a compute machine set custom resource on Azure This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg sshPrivateKey: "" sshPublicKey: "" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: "1" 8 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 Specify the node label to add. 3 Specify the infrastructure ID, node label, and region. 4 Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Using the Azure Marketplace offering". 5 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 6 Specify the region to place machines on. 7 Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field. 8 Specify the zone within your region to place machines on. Ensure that your region supports the zone that you specify. Important If your region supports availability zones, you must specify the zone. 
Specifying the zone avoids volume node affinity failure when a pod requires a persistent volume attachment. To do this, you can create a compute machine set for each zone in the same region. 2.3.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.3.3. 
Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.3.4. Using the Azure Marketplace offering You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 Note Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.13. 
If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer, specifically the values for publisher , offer , sku , and version . Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer: Sample providerSpec image values for Azure Marketplace machines providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: "" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700 2.3.5. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 2.3.6. Machine sets that deploy machines as Spot VMs You can save on costs by creating a compute machine set running on Azure that deploys machines as non-guaranteed Spot VMs. Spot VMs utilize unused Azure capacity and are less expensive than standard VMs. You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. Azure can terminate a Spot VM at any time. Azure gives a 30-second warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when Azure issues the termination warning. Interruptions can occur when using Spot VMs for the following reasons: The instance price exceeds your maximum price The supply of Spot VMs decreases Azure needs capacity back When Azure terminates an instance, a termination handler running on the Spot VM node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot VM. 2.3.6.1. Creating Spot VMs by using compute machine sets You can launch a Spot VM on Azure by adding spotVMOptions to your compute machine set YAML file. 
Procedure Add the following line under the providerSpec field: providerSpec: value: spotVMOptions: {} You can optionally set the spotVMOptions.maxPrice field to limit the cost of the Spot VM. For example you can set maxPrice: '0.98765' . If the maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to -1 and charges up to the standard VM price. Azure caps Spot VM prices at the standard price. Azure will not evict an instance due to pricing if the instance is set with the default maxPrice . However, an instance can still be evicted due to capacity restrictions. Note It is strongly recommended to use the default standard VM price as the maxPrice value and to not set the maximum price for Spot VMs. 2.3.7. Machine sets that deploy machines on Ephemeral OS disks You can create a compute machine set running on Azure that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote Azure Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging. Additional resources For more information, see the Microsoft Azure documentation about Ephemeral OS disks for Azure VMs . 2.3.7.1. Creating machines on Ephemeral OS disks by using compute machine sets You can launch machines on Ephemeral OS disks on Azure by editing your compute machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Edit the custom resource (CR) by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the compute machine set that you want to provision machines on Ephemeral OS disks. Add the following to the providerSpec field: providerSpec: value: ... osDisk: ... diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4 ... 1 2 3 These lines enable the use of Ephemeral OS disks. 4 Ephemeral OS disks are only supported for VMs or scale set instances that use the Standard LRS storage account type. Important The implementation of Ephemeral OS disk support in OpenShift Container Platform only supports the CacheDisk placement type. Do not change the placement configuration setting. Create a compute machine set using the updated configuration: USD oc create -f <machine-set-config>.yaml Verification On the Microsoft Azure portal, review the Overview page for a machine deployed by the compute machine set, and verify that the Ephemeral OS disk field is set to OS cache placement . 2.3.8. Machine sets that deploy machines with ultra disks as data disks You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. You can also create a persistent volume claim (PVC) that dynamically binds to a storage class backed by Azure ultra disks and mounts them to pods. Note Data disks do not support the ability to specify disk throughput or disk IOPS. You can configure these properties by using PVCs. Additional resources Microsoft Azure ultra disks documentation Machine sets that deploy machines on ultra disks using CSI PVCs Machine sets that deploy machines on ultra disks using in-tree PVCs 2.3.8.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. 
Procedure Create a custom secret in the openshift-machine-api namespace using the worker data secret by running the following command: USD oc -n openshift-machine-api \ get secret <role>-user-data \ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2 1 Replace <role> with worker . 2 Specify userData.txt as the name of the new custom secret. In a text editor, open the userData.txt file and locate the final } character in the file. On the immediately preceding line, add a , . Create a new line after the , and add the following configuration details: "storage": { "disks": [ 1 { "device": "/dev/disk/azure/scsi1/lun0", 2 "partitions": [ 3 { "label": "lun0p1", 4 "sizeMiB": 1024, 5 "startMiB": 0 } ] } ], "filesystems": [ 6 { "device": "/dev/disk/by-partlabel/lun0p1", "format": "xfs", "path": "/var/lib/lun0p1" } ] }, "systemd": { "units": [ 7 { "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", 8 "enabled": true, "name": "var-lib-lun0p1.mount" } ] } 1 The configuration details for the disk that you want to attach to a node as an ultra disk. 2 Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0 , specify lun0 . You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set. 3 The configuration details for a new partition on the disk. 4 Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0 . 5 Specify the total size in MiB of the partition. 6 Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition. 7 Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each. 8 For Where , specify the value of storage.filesystems.path . For What , specify the value of storage.filesystems.device . Extract the disabling template value to a file called disableTemplating.txt by running the following command: USD oc -n openshift-machine-api get secret <role>-user-data \ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt 1 Replace <role> with worker . Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command: USD oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ 1 --from-file=userData=userData.txt \ --from-file=disableTemplating=disableTemplating.txt 1 For <role>-user-data-x5 , specify the name of the secret. Replace <role> with worker . Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the machine set that you want to provision machines with ultra disks. 
Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 3 These lines enable the use of ultra disks. For dataDisks , include the entire stanza. 4 Specify the user data secret created earlier. Replace <role> with worker . Create a machine set using the updated configuration by running the following command: USD oc create -f <machine-set-name>.yaml Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example: apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - name: lun0p1 mountPath: "/tmp" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd 2.3.8.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 2.3.8.2.1. Incorrect ultra disk configuration If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails. For example, if the ultraSSDCapability parameter is set to Disabled , but an ultra disk is specified in the dataDisks parameter, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. To resolve this issue, verify that your machine set configuration is correct. 2.3.8.2.2. Unsupported disk parameters If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message: failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>." To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct. 2.3.8.2.3. Unable to delete disks If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. You must delete the orphaned disks manually if desired. 2.3.9. 
Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role is required to be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 2.3.10. Configuring trusted launch for Azure virtual machines by using machine sets Important Using trusted launch for Azure virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.15 supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. Note Some feature combinations result in an invalid configuration. Table 2.1. UEFI feature combination compatibility Secure Boot [1] vTPM [2] Valid configuration Enabled Enabled Yes Enabled Disabled Yes Enabled Omitted Yes Disabled Enabled Yes Omitted Enabled Yes Disabled Disabled No Omitted Disabled No Omitted Omitted No Using the secureBoot field. Using the virtualizedTrustedPlatformModule field. For more information about related features and functionality, see the Microsoft Azure documentation about Trusted launch for Azure virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field to provide a valid configuration: Sample valid configuration with UEFI Secure Boot and vTPM enabled apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations. 2 Specifies which UEFI security features to use. This section is required for all valid configurations. 3 Enables UEFI Secure Boot. 4 Enables the use of a vTPM. 
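In addition to the Azure portal check in the verification that follows, you can confirm from the command line that the machine set carries the trusted launch settings. This jsonpath query is only a sketch; <machine_set_name> is a placeholder for the name of your machine set. USD oc -n openshift-machine-api get machineset <machine_set_name> \
  -o jsonpath='{.spec.template.spec.providerSpec.value.securityProfile}{"\n"}'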
Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured. 2.3.11. Configuring Azure confidential virtual machines by using machine sets Important Using Azure confidential virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.15 supports Azure confidential virtual machines (VMs). Note Confidential VMs are currently not supported on 64-bit ARM architectures. By editing the machine set YAML file, you can configure the confidential VM options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. For more information about related features and functionality, see the Microsoft Azure documentation about Confidential virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: osDisk: # ... managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # ... securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8 # ... 1 Specifies security profile settings for the managed disk when using a confidential VM. 2 Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM. 3 Specifies security profile settings for the confidential VM. 4 Enables the use of confidential VMs. This value is required for all valid configurations. 5 Specifies which UEFI security features to use. This section is required for all valid configurations. 6 Disables UEFI Secure Boot. 7 Enables the use of a vTPM. 8 Specifies an instance type that supports confidential VMs. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured. 2.3.12. Accelerated Networking for Microsoft Azure VMs Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled during or after installation. 2.3.12.1. Limitations Consider the following limitations when deciding whether to use Accelerated Networking: Accelerated Networking is only supported on clusters where the Machine API is operational. Although the minimum requirement for an Azure worker node is two vCPUs, Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation . 
When this feature is enabled on an existing Azure cluster, only newly provisioned nodes are affected. Currently running nodes are not reconciled. To enable the feature on all nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas. 2.3.13. Configuring Capacity Reservation by using machine sets OpenShift Container Platform version 4.15.25 and later supports on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters. You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the VM size, region, and number of instances that you want to reserve. If your Azure subscription quota can accommodate the capacity request, the deployment succeeds. For more information, including limitations and suggested use cases for this Azure instance type, see the Microsoft Azure documentation about On-demand Capacity Reservation . Note You cannot change an existing Capacity Reservation configuration for a machine set. To use a different Capacity Reservation group, you must replace the machine set and the machines that the machine set deployed. Prerequisites You have access to the cluster with cluster-admin privileges. You installed the OpenShift CLI ( oc ). You created a Capacity Reservation group. For more information, see the Microsoft Azure documentation Create a Capacity Reservation . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1 # ... 1 Specify the ID of the Capacity Reservation group that you want the machine set to deploy machines on. Verification To verify machine deployment, list the machines that the machine set created by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> where <machine_set_name> is the name of the compute machine set. In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation. 2.3.14. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the Azure cloud provider. The following table lists the validated instance types: vmSize NVIDIA GPU accelerator Maximum number of GPUs Architecture Standard_NC24s_v3 V100 4 x86 Standard_NC4as_T4_v3 T4 1 x86 ND A100 v4 A100 8 x86 Note By default, Azure subscriptions do not have a quota for the Azure instance types with GPU. Customers have to request a quota increase for the Azure instance families listed above. Procedure View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones. 
USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m Make a copy of one of the existing compute MachineSet definitions and output the result to a YAML file by running the following command. This will be the basis for the GPU-enabled compute machine set definition. USD oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml View the content of the machineset: USD cat machineset-azure.yaml Example machineset-azure.yaml file apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: "0" machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" creationTimestamp: "2023-02-06T14:08:19Z" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: "23601" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: "" publisher: "" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: "" version: "" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: "1" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1 Make a copy of the machineset-azure.yaml file by running the following command: USD cp machineset-azure.yaml machineset-azure-gpu.yaml Update the following fields in machineset-azure-gpu.yaml : Change .metadata.name to a name containing gpu . Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name. Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . Change .spec.template.spec.providerSpec.value.vmSize to Standard_NC4as_T4_v3 . 
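The field updates listed above can also be applied in one pass. The following is only a sketch, assuming yq v4 (the Go implementation) is available on your workstation; it is not part of the official procedure, and the names mirror the example. The fully edited file is shown in the example that follows. USD yq -i '
  .metadata.name = "myclustername-nc4ast4-gpu-worker-centralus1" |
  .spec.selector.matchLabels."machine.openshift.io/cluster-api-machineset" = "myclustername-nc4ast4-gpu-worker-centralus1" |
  .spec.template.metadata.labels."machine.openshift.io/cluster-api-machineset" = "myclustername-nc4ast4-gpu-worker-centralus1" |
  .spec.template.spec.providerSpec.value.vmSize = "Standard_NC4as_T4_v3"
' machineset-azure-gpu.yaml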
Example machineset-azure-gpu.yaml file apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: "1" machine.openshift.io/memoryMb: "28672" machine.openshift.io/vCPU: "4" creationTimestamp: "2023-02-06T20:27:12Z" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: "166285" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: "" publisher: "" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: "" version: "" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: "1" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1 To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD diff machineset-azure.yaml machineset-azure-gpu.yaml Example output 14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3 Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f machineset-azure-gpu.yaml Example output machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones. 
USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone. USD oc get machines -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific Azure region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.28.5 myclustername-master-1 Ready control-plane,master 6h41m v1.28.5 myclustername-master-2 Ready control-plane,master 6h39m v1.28.5 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.28.5 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.28.5 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.28.5 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.28.5 View the list of compute machine sets: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f machineset-azure-gpu.yaml View the list of compute machine sets: oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h Verification View the machine set you created by running the following command: USD oc get machineset -n openshift-machine-api | grep gpu The MachineSet replica count is set to 1 so a new Machine object is created automatically. Example output myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m Note There is no need to specify a namespace for the node. The node definition is cluster scoped. 2.3.15. Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. 
The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator from OperatorHub , select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Operator in the console and select Create Node Feature Discovery . Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them. Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet. Additional resources Enabling Accelerated Networking during installation 2.3.15.1. Enabling Accelerated Networking on an existing Microsoft Azure cluster You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster where the Machine API is operational. Procedure Add the following to the providerSpec field: providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2 1 This line enables Accelerated Networking. 2 Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation . Next steps To enable the feature on currently running nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas. Verification On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled . Additional resources Manually scaling a compute machine set 2.4. Creating a compute machine set on Azure Stack Hub You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure Stack Hub.
For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.4.1. Sample YAML for a compute machine set custom resource on Azure Stack Hub This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: "1" 21 1 5 7 13 15 16 17 20 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 18 19 Specify the node label to add. 4 6 10 Specify the infrastructure ID, node label, and region. 14 Specify the region to place machines on. 21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 12 Specify the availability set for the cluster. 2.4.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Create an availability set in which to deploy Azure Stack Hub compute machines. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <availabilitySet> , <clusterID> , and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
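Before you create the CR, you must replace the placeholders in the sample YAML with real values. One rough way to do this for the Azure Stack Hub sample is a short sed pass; this is only a sketch, the file name machineset-ash.yaml is hypothetical, and the role, region, and availability set values are examples that you must adjust for your environment. USD INFRA_ID=$(oc get -o jsonpath='{.status.infrastructureName}' infrastructure cluster)
USD sed -i \
    -e "s/<infrastructure_id>/${INFRA_ID}/g" \
    -e "s/<role>/infra/g" \
    -e "s/<region>/mylocation/g" \
    -e "s/<availability_set>/myavailabilityset/g" \
    machineset-ash.yaml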
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.4.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.4.4. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure Stack Hub cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 2.4.5. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role is required to be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . 
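If you prefer the command line to the linked Azure procedures, the prerequisite resources can usually be created with the Azure CLI. The following commands are only a sketch: every name in angle brackets is a placeholder, <key_identifier_url> is the key identifier returned by the key creation step, <disk_encryption_set_principal_id> is the identity.principalId value reported by az disk-encryption-set show, and the commands and flags available on Azure Stack Hub can differ from public Azure, so verify them against your environment.

USD az keyvault create -n <key_vault_name> -g <resource_group> --enable-purge-protection true
USD az keyvault key create --vault-name <key_vault_name> -n <key_name> --protection software
USD az disk-encryption-set create -n <disk_encryption_set_name> -g <resource_group> --key-url <key_identifier_url> --source-vault <key_vault_name>
USD az keyvault set-policy -n <key_vault_name> --object-id <disk_encryption_set_principal_id> --key-permissions wrapkey unwrapkey get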
Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 2.5. Creating a compute machine set on GCP You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.5.1. Sample YAML for a compute machine set custom resource on GCP This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" , where <role> is the node label to add. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Image path The <path_to_image> string is the path to the image that was used to create the disk. 
If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a Sample GCP MachineSet values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 For <node> , specify the node label to add. 3 Specify the path to the image that is used in current compute machine sets. To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 6 Specifies a single service account. Multiple service accounts are not supported. 2.5.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. 
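As a convenience, you can substitute the infrastructure ID into a copy of the sample before you edit the remaining fields by hand. The following is only a sketch of one possible workflow: gcp-machineset-sample.yaml is a hypothetical copy of the sample above, infra is an example role, and <your_infrastructure_id> is the value returned by the oc get infrastructure command shown earlier.

USD sed 's/<infrastructure_id>/<your_infrastructure_id>/g; s/<role>/infra/g' gcp-machineset-sample.yaml > <file_name>.yaml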
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.5.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. 
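For example, a ClusterAutoscaler resource that lets the autoscaler manage machines carrying this label might include a gpus entry such as the following. This is a minimal sketch: the min and max values are illustrative only, and the type value must match the label value that you set on the machine set.

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
      - type: nvidia-t4 # must match the cluster-api/accelerator label value
        min: 0          # illustrative limits; set values for your workload
        max: 4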
For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.5.4. Configuring persistent disk types by using machine sets You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file. For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following line under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1 1 Specify the persistent disk type. Valid values are pd-ssd , pd-standard , and pd-balanced . The default value is pd-standard . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Type field matches the configured disk type. 2.5.5. Configuring Confidential VM by using machine sets By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys. For more information about Confidential VM features, functions, and compatibility, see the GCP Compute Engine documentation about Confidential VM . Note Confidential VMs are currently not supported on 64-bit ARM architectures. Important OpenShift Container Platform 4.15 does not support some Confidential Compute features, such as Confidential VMs with AMD Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP). Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3 ... 1 Specify whether Confidential VM is enabled. Valid values are Disabled or Enabled . 2 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VM does not support live VM migration. 3 Specify a machine type that supports Confidential VM. Confidential VM supports the N2D and C2D series of machine types. Verification On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured. 2.5.6. Machine sets that deploy machines as preemptible VM instances You can save on costs by creating a compute machine set running on GCP that deploys machines as non-guaranteed preemptible VM instances. Preemptible VM instances utilize excess Compute Engine capacity and are less expensive than normal instances. You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. GCP Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur in 30 seconds. OpenShift Container Platform begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. 
An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a TERMINATED state by Compute Engine. Interruptions can occur when using preemptible VM instances for the following reasons: There is a system or maintenance event The supply of preemptible VM instances decreases The instance reaches the end of the allotted 24-hour period for preemptible VM instances When GCP terminates an instance, a termination handler running on the preemptible VM instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a preemptible VM instance. 2.5.6.1. Creating preemptible VM instances by using compute machine sets You can launch a preemptible VM instance on GCP by adding preemptible to your compute machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: preemptible: true If preemptible is set to true , the machine is labelled as an interruptable-instance after the instance is launched. 2.5.7. Configuring Shielded VM options by using machine sets By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys. For more information about Shielded VM features and functionality, see the GCP Compute Engine documentation about Shielded VM . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 In this section, specify any Shielded VM options that you want. 2 Specify whether integrity monitoring is enabled. Valid values are Disabled or Enabled . Note When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM). 3 Specify whether UEFI Secure Boot is enabled. Valid values are Disabled or Enabled . 4 Specify whether vTPM is enabled. Valid values are Disabled or Enabled . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured. Additional resources What is Shielded VM? Secure Boot Virtual Trusted Platform Module (vTPM) Integrity monitoring 2.5.8. Enabling customer-managed encryption keys for a machine set Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer's data. By default, Compute Engine encrypts this data by using Compute Engine keys. You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key. Note If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. 
The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. Procedure To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location: USD gcloud kms keys add-iam-policy-binding <key_name> \ --keyring <key_ring_name> \ --location <key_ring_location> \ --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter Configure the encryption key under the providerSpec field in your machine set YAML file. For example: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5 1 The name of the customer-managed encryption key that is used for the disk encryption. 2 The name of the KMS key ring that the KMS key belongs to. 3 The GCP location in which the KMS key ring exists. 4 Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the machine set projectID in which the machine set was created is used. 5 Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used. When a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key. 2.5.9. Enabling GPU support for a compute machine set Google Cloud Platform (GCP) Compute Engine enables users to add GPUs to VM instances. Workloads that benefit from access to GPU resources can perform better on compute machines with this feature enabled. OpenShift Container Platform on GCP supports NVIDIA GPU models in the A2 and N1 machine series. Table 2.2. Supported GPU configurations Model name GPU type Machine types [1] NVIDIA A100 nvidia-tesla-a100 a2-highgpu-1g a2-highgpu-2g a2-highgpu-4g a2-highgpu-8g a2-megagpu-16g NVIDIA K80 nvidia-tesla-k80 n1-standard-1 n1-standard-2 n1-standard-4 n1-standard-8 n1-standard-16 n1-standard-32 n1-standard-64 n1-standard-96 n1-highmem-2 n1-highmem-4 n1-highmem-8 n1-highmem-16 n1-highmem-32 n1-highmem-64 n1-highmem-96 n1-highcpu-2 n1-highcpu-4 n1-highcpu-8 n1-highcpu-16 n1-highcpu-32 n1-highcpu-64 n1-highcpu-96 NVIDIA P100 nvidia-tesla-p100 NVIDIA P4 nvidia-tesla-p4 NVIDIA T4 nvidia-tesla-t4 NVIDIA V100 nvidia-tesla-v100 For more information about machine types, including specifications, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about N1 machine series , A2 machine series , and GPU regions and zones availability . You can define which supported GPU to use for an instance by using the Machine API. You can configure machines in the N1 machine series to deploy with one of the supported GPU types. Machines in the A2 machine series come with associated GPUs, and cannot use guest accelerators. Note GPUs for graphics workloads are not supported. Procedure In a text editor, open the YAML file for an existing compute machine set or create a new one. Specify a GPU configuration under the providerSpec field in your compute machine set YAML file. 
See the following examples of valid configurations: Example configuration for the A2 machine series: providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3 1 Specify the machine type. Ensure that the machine type is included in the A2 machine series. 2 When using GPU support, you must set onHostMaintenance to Terminate . 3 Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never . Example configuration for the N1 machine series: providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5 1 Specify the number of GPUs to attach to the machine. 2 Specify the type of GPUs to attach to the machine. Ensure that the machine type and GPU type are compatible. 3 Specify the machine type. Ensure that the machine type and GPU type are compatible. 4 When using GPU support, you must set onHostMaintenance to Terminate . 5 Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never . 2.5.10. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the GCP cloud provider. The following table lists the validated instance types: Instance type NVIDIA GPU accelerator Maximum number of GPUs Architecture a2-highgpu-1g A100 1 x86 n1-standard-4 T4 1 x86 Procedure Make a copy of an existing MachineSet . In the new copy, change the machine set name in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset . Change the instance type to add the following two lines to the newly copied MachineSet : Example a2-highgpu-1g.json file { "apiVersion": "machine.openshift.io/v1beta1", "kind": "MachineSet", "metadata": { "annotations": { "machine.openshift.io/GPU": "0", "machine.openshift.io/memoryMb": "16384", "machine.openshift.io/vCPU": "4" }, "creationTimestamp": "2023-01-13T17:11:02Z", "generation": 1, "labels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p" }, "name": "myclustername-2pt9p-worker-gpu-a", "namespace": "openshift-machine-api", "resourceVersion": "20185", "uid": "2daf4712-733e-4399-b4b4-d43cb1ed32bd" }, "spec": { "replicas": 1, "selector": { "matchLabels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p", "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" } }, "template": { "metadata": { "labels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p", "machine.openshift.io/cluster-api-machine-role": "worker", "machine.openshift.io/cluster-api-machine-type": "worker", "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" } }, "spec": { "lifecycleHooks": {}, "metadata": {}, "providerSpec": { "value": { "apiVersion": "machine.openshift.io/v1beta1", "canIPForward": false, "credentialsSecret": { "name": "gcp-cloud-credentials" }, "deletionProtection": false, "disks": [ { "autoDelete": true, "boot": true, "image": "projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64", "labels": null, "sizeGb": 128, "type": "pd-ssd" } ], "kind": "GCPMachineProviderSpec", "machineType": "a2-highgpu-1g", "onHostMaintenance": "Terminate", "metadata": { "creationTimestamp": null }, "networkInterfaces": [ { "network": "myclustername-2pt9p-network", "subnetwork": "myclustername-2pt9p-worker-subnet" } ], 
"preemptible": true, "projectID": "myteam", "region": "us-central1", "serviceAccounts": [ { "email": "[email protected]", "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] } ], "tags": [ "myclustername-2pt9p-worker" ], "userDataSecret": { "name": "worker-user-data" }, "zone": "us-central1-a" } } } } }, "status": { "availableReplicas": 1, "fullyLabeledReplicas": 1, "observedGeneration": 1, "readyReplicas": 1, "replicas": 1 } } View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific GCP region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.28.5 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.28.5 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.28.5 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.28.5 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.28.5 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.28.5 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.28.5 View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the GCP region. The installer automatically load balances compute machines across availability zones. USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone. USD oc get machines -n openshift-machine-api | grep worker Example output myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition. USD oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json> Edit the JSON file to make the following changes to the new MachineSet definition: Rename the machine set name by inserting the substring gpu in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset . Change the machineType of the new MachineSet definition to a2-highgpu-1g , which includes an NVIDIA A100 GPU. jq .spec.template.spec.providerSpec.value.machineType ocp_4.15_machineset-a2-highgpu-1g.json "a2-highgpu-1g" The <output_file.json> file is saved as ocp_4.15_machineset-a2-highgpu-1g.json . Update the following fields in ocp_4.15_machineset-a2-highgpu-1g.json : Change .metadata.name to a name containing gpu . Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . 
Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . Change .spec.template.spec.providerSpec.value.machineType to a2-highgpu-1g . Add the following line under machineType : "onHostMaintenance": "Terminate" . For example: "machineType": "a2-highgpu-1g", "onHostMaintenance": "Terminate", To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.15_machineset-a2-highgpu-1g.json - Example output 15c15 < "name": "myclustername-2pt9p-worker-gpu-a", --- > "name": "myclustername-2pt9p-worker-a", 25c25 < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" --- > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a" 34c34 < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" --- > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a" 59,60c59 < "machineType": "a2-highgpu-1g", < "onHostMaintenance": "Terminate", --- > "machineType": "n2-standard-4", Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f ocp_4.15_machineset-a2-highgpu-1g.json Example output machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created Verification View the machine set you created by running the following command: USD oc -n openshift-machine-api get machinesets | grep gpu The MachineSet replica count is set to 1 so a new Machine object is created automatically. Example output myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m Note Note that there is no need to specify a namespace for the node. The node definition is cluster scoped. 2.5.11. Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator from OperatorHub , select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Operator in the console and select Create Node Feature Discovery . Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them.
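If you prefer to create the instance from the command line instead of the console form, the equivalent custom resource looks roughly like the following. This is a sketch that assumes the installed NFD Operator serves the nfd.openshift.io/v1 API and applies its own operand defaults when the spec is left empty; check the schema in your cluster, for example with oc explain nodefeaturediscovery.spec, before applying it.

apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance          # hypothetical name; the console form uses a similar default
  namespace: openshift-nfd
spec: {}                      # assumed: the Operator fills in operand defaults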
Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet. 2.6. Creating a compute machine set on IBM Cloud You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on IBM Cloud(R). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}'
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 16 The node label to add. 4 6 10 The infrastructure ID, node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify. 13 Specify the IBM Cloud(R) instance profile . 14 Specify the region to place machines on. 15 The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. 17 The VPC name. 18 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 2.6.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.6.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.7. 
Creating a compute machine set on IBM Power Virtual Server You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on IBM Power(R) Virtual Server. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.7.1. Sample YAML for a compute machine set custom resource on IBM Power Virtual Server This sample YAML file defines a compute machine set that runs in a specified IBM Power(R) Virtual Server zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_PrivateUSD type: RegEx processorType: Shared processors: "0.5" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 The node label to add. 4 6 10 The infrastructure ID, node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID within your region to place machines on. 2.7.2. 
Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.7.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. 
Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.8. Creating a compute machine set on Nutanix You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Nutanix. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.8.1. Sample YAML for a compute machine set custom resource on Nutanix This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI ( oc ). Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 Specify the node label to add. 3 Specify the infrastructure ID, node label, and zone. 4 Annotations for the cluster autoscaler. 5 Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.15. 6 Specify one or more Nutanix Prism categories to apply to compute machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management . 7 Specify a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. 8 Specify the image to use. Use an image from an existing default compute machine set for the cluster. 9 Specify the amount of memory for the cluster in Gi. 10 Specify the Nutanix project that you use for your cluster. In this example, the project type is name , so there is a name stanza. 11 Specify the size of the system disk in Gi. 12 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that installation program populates in the default compute machine set. 13 Specify the number of vCPU sockets. 14 Specify the number of vCPUs per socket. 2.8.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). 
Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.8.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . 
and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.8.4. Failure domains for Nutanix clusters To add or update the failure domain configuration on a Nutanix cluster, you must make coordinated changes to several resources. The following actions are required: Modify the cluster infrastructure custom resource (CR). Modify the cluster control plane machine set CR. Modify or replace the compute machine set CRs. For more information, see "Adding failure domains to an existing Nutanix cluster" in the Post-installation configuration content. Additional resources Adding failure domains to an existing Nutanix cluster 2.9. Creating a compute machine set on OpenStack You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.9.1. Sample YAML for a compute machine set custom resource on RHOSP This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone> 1 5 7 13 15 16 17 18 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 19 Specify the node label to add. 4 6 10 Specify the infrastructure ID and node label. 11 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 12 Required for deployments to multiple networks. To specify multiple networks, add another entry in the networks array. Also, you must include the network that is used as the primarySubnet value. 14 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 2.9.2. Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP If you configured your cluster for single-root I/O virtualization (SR-IOV), you can create compute machine sets that use that technology. This sample YAML defines a compute machine set that uses SR-IOV networks. The nodes that it creates are labeled with node-role.openshift.io/<node_role>: "" In this sample, infrastructure_id is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and node_role is the node label to add. The sample assumes two SR-IOV networks that are named "radio" and "uplink". The networks are used in port definitions in the spec.template.spec.providerSpec.value.ports list. Note Only parameters that are specific to SR-IOV deployments are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource on RHOSP". 
An example compute machine set that uses SR-IOV networks apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> 1 5 Enter a network UUID for each port. 2 6 Enter a subnet UUID for each port. 3 7 The value of the vnicType parameter must be direct for each port. 4 8 The value of the portSecurity parameter must be false for each port. You cannot set security groups and allowed address pairs for ports when port security is disabled. Setting security groups on the instance applies the groups to all ports that are attached to it. Important After you deploy compute machines that are SR-IOV-capable, you must label them as such. For example, from a command line, enter: USD oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable="true" Note Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix> . The nameSuffix field is required in port definitions. You can enable trunking for each port. Optionally, you can add tags to ports as part of their tags lists. Additional resources Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack 2.9.3. Sample YAML for SR-IOV deployments where port security is disabled To create single-root I/O virtualization (SR-IOV) ports on a network that has port security disabled, define a compute machine set that includes the ports as items in the spec.template.spec.providerSpec.value.ports list. 
This difference from the standard SR-IOV compute machine set is due to the automatic security group and allowed address pair configuration that occurs for ports that are created by using the network and subnet interfaces. Ports that you define for machines subnets require: Allowed address pairs for the API and ingress virtual IP ports The compute security group Attachment to the machines network and subnet Note Only parameters that are specific to SR-IOV deployments where port security is disabled are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP". An example compute machine set that uses SR-IOV networks and has port security disabled apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data 1 Specify allowed address pairs for the API and ingress ports. 2 3 Specify the machines network and subnet. 4 Specify the compute machines security group. Note Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix> . The nameSuffix field is required in port definitions. You can enable trunking for each port. Optionally, you can add tags to ports as part of their tags lists. 2.9.4. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml .
Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.9.5. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. 
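The following sketch shows one way the label value might be referenced from a ClusterAutoscaler custom resource. This example is an illustration that is not taken from this document; the nvidia-t4 type value and the min and max limits are assumed values that you must adjust to match your own label and capacity requirements.

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
    # The type value is assumed to match the cluster-api/accelerator label value on the machine set.
    - type: nvidia-t4
      min: 0
      max: 4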
Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.10. Creating a compute machine set on vSphere You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.10.1. Sample YAML for a compute machine set custom resource on vSphere This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and node label. 6 7 9 Specify the node label to add. 10 Specify the vSphere VM network to deploy the compute machine set to. 
This VM network must be where other compute machines reside in the cluster. 11 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 12 Specify the vCenter Datacenter to deploy the compute machine set on. 13 Specify the vCenter Datastore to deploy the compute machine set on. 14 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 15 Specify the vSphere resource pool for your VMs. 16 Specify the vCenter server IP or fully qualified domain name. 2.10.2. Minimum required vCenter privileges for compute machine set management To manage compute machine sets in an OpenShift Container Platform cluster on vCenter, you must use an account with privileges to read, create, and delete the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the minimum required privileges. The following table lists the minimum vCenter roles and privileges that are required to create, scale, and delete compute machine sets and to delete machines in your OpenShift Container Platform cluster. Example 2.1. Minimum vCenter roles and privileges required for compute machine set management vSphere object for role When required Required privileges vSphere vCenter Always InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update 1 StorageProfile.View 1 vSphere vCenter Cluster Always Resource.AssignVMToPool vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse vSphere Port Group Always Network.Assign Virtual Machine Folder Always VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.Memory VirtualMachine.Config.Settings VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter Datacenter If the installation program creates the virtual machine folder Resource.AssignVMToPool VirtualMachine.Provisioning.DeployTemplate 1 The StorageProfile.Update and StorageProfile.View permissions are required only for storage backends that use the Container Storage Interface (CSI). The following table details the permissions and propagation settings that are required for compute machine set management. Example 2.2. 
Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always Not required Listed required privileges vSphere vCenter Datacenter Existing folder Not required ReadOnly permission Installation program creates the folder Required Listed required privileges vSphere vCenter Cluster Always Required Listed required privileges vSphere vCenter Datastore Always Not required Listed required privileges vSphere Switch Always Not required ReadOnly permission vSphere Port Group Always Not required Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder Required Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. 2.10.3. Requirements for clusters with user-provisioned infrastructure to use compute machine sets To use compute machine sets on clusters that have user-provisioned infrastructure, you must ensure that your cluster configuration supports using the Machine API. Obtaining the infrastructure ID To create compute machine sets, you must be able to supply the infrastructure ID for your cluster. Procedure To obtain the infrastructure ID for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}' Satisfying vSphere credentials requirements To use compute machine sets, the Machine API must be able to interact with vCenter. Credentials that authorize the Machine API components to interact with vCenter must exist in a secret in the openshift-machine-api namespace. Procedure To determine whether the required credentials exist, run the following command: USD oc get secret \ -n openshift-machine-api vsphere-cloud-credentials \ -o go-template='{{range USDk,USDv := .data}}{{printf "%s: " USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{"\n"}}{{end}}' Sample output <vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user> where <vcenter-server> is the IP address or fully qualified domain name (FQDN) of the vCenter server and <openshift-user> and <openshift-user-password> are the OpenShift Container Platform administrator credentials to use. If the secret does not exist, create it by running the following command: USD oc create secret generic vsphere-cloud-credentials \ -n openshift-machine-api \ --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password> Satisfying Ignition configuration requirements Provisioning virtual machines (VMs) requires a valid Ignition configuration. The Ignition configuration contains the machine-config-server address and a system trust bundle for obtaining further Ignition configurations from the Machine Config Operator. By default, this configuration is stored in the worker-user-data secret in the openshift-machine-api namespace. Compute machine sets reference the secret during the machine creation process. Procedure To determine whether the required secret exists, run the following command: USD oc get secret \ -n openshift-machine-api worker-user-data \ -o go-template='{{range USDk,USDv := .data}}{{printf "%s: " USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{"\n"}}{{end}}' Sample output disableTemplating: false userData: 1 { "ignition": { ... }, ... } 1 The full output is omitted here, but should have this format.
If the secret does not exist, create it by running the following command: USD oc create secret generic worker-user-data \ -n openshift-machine-api \ --from-file=<installation_directory>/worker.ign where <installation_directory> is the directory that was used to store your installation assets during cluster installation. Additional resources Understanding the Machine Config Operator Installing RHCOS and starting the OpenShift Container Platform bootstrap process 2.10.4. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Note Clusters that are installed with user-provisioned infrastructure have a different networking stack than clusters with infrastructure that is provisioned by the installation program. As a result of this difference, automatic load balancer management is unsupported on clusters that have user-provisioned infrastructure. For these clusters, a compute machine set can only create worker and infra type machines. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Have the necessary permissions to deploy VMs in your vCenter instance and have the required access to the datastore specified. If your cluster uses user-provisioned infrastructure, you have satisfied the specific Machine API requirements for that configuration. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. 
For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. If you are creating a compute machine set for a cluster that has user-provisioned infrastructure, note the following important values: Example vSphere providerSpec values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... template: ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: "<vm_network_name>" numCPUs: 4 numCoresPerSocket: 4 snapshot: "" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_datacenter_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4 1 The name of the secret in the openshift-machine-api namespace that contains the required vCenter credentials. 2 The name of the RHCOS VM template for your cluster that was created during installation. 3 The name of the secret in the openshift-machine-api namespace that contains the required Ignition configuration credentials. 4 The IP address or fully qualified domain name (FQDN) of the vCenter server. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.10.5. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.11. Creating a compute machine set on bare metal You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on bare metal. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. 
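Because the capabilities described in this section depend on the Machine API, as the Important note that follows explains, you can optionally confirm that the Machine API components are healthy before you begin. This check is a suggestion rather than a documented step; it inspects the machine-api cluster Operator:

oc get clusteroperator machine-api

If the AVAILABLE column reports True and the DEGRADED column reports False, the Machine API is operational on the cluster.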
Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.11.1. Sample YAML for a compute machine set custom resource on bare metal This sample YAML defines a compute machine set that runs on bare metal and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data-managed 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and node label. 6 7 9 Specify the node label to add. 10 Edit the checksum URL to use the API VIP address. 11 Edit the url value to use the API VIP address. 2.11.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.11.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition
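After you create a compute machine set on any of the platforms described in this chapter, you can adjust the number of machines that it manages by changing its replica count. The following command is a general illustration rather than a step from a specific procedure in this document; replace <machineset_name> with the name of your compute machine set:

oc scale machineset <machineset_name> -n openshift-machine-api --replicas=2

Alternatively, you can edit the spec.replicas field of the MachineSet resource directly by running oc edit machineset <machineset_name> -n openshift-machine-api.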
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: \"\" zoneId: <zone> 21", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags", "spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp", "spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 
spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned - name: <custom_tag_name> 17 value: <custom_tag_value> 18 userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 
1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5", "providerSpec: value: metadataServiceOptions: authentication: Required 1", "providerSpec: placement: tenancy: dedicated", "providerSpec: value: spotMarketOptions: {}", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.28.5 ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.28.5 ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.28.5 ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.28.5 ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.28.5 ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.28.5", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h", "oc get machines -n openshift-machine-api | grep worker", "preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h", "oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json>", "jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json \"g4dn.xlarge\"", "oc -n openshift-machine-api get preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json -", "10c10 < \"name\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\", --- > \"name\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\", 21c21 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 31c31 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 60c60 < \"instanceType\": \"g4dn.xlarge\", --- > \"instanceType\": \"m5.xlarge\",", "oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json", "machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created", "oc -n openshift-machine-api get machinesets | grep gpu", "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s", "oc -n openshift-machine-api get machines | grep gpu", "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE 
nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: 
machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700", "providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "providerSpec: value: spotVMOptions: {}", "oc edit machineset <machine-set-name>", "providerSpec: value: osDisk: diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4", "oc create -f <machine-set-config>.yaml", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2", "\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": 
\"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt", "oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4", "oc create -f <machine-set-name>.yaml", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: osDisk: # managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m", "oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml", "cat machineset-azure.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: 
machine.openshift.io/GPU: \"0\" machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T14:08:19Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"23601\" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1", "cp machineset-azure.yaml machineset-azure-gpu.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"1\" machine.openshift.io/memoryMb: \"28672\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T20:27:12Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"166285\" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus 
managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1", "diff machineset-azure.yaml machineset-azure-gpu.yaml", "14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3", "oc create -f machineset-azure-gpu.yaml", "machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h", "oc get machines -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m", "oc get nodes", "NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.28.5 myclustername-master-1 Ready control-plane,master 6h41m v1.28.5 myclustername-master-2 Ready control-plane,master 6h39m v1.28.5 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.28.5 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.28.5 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.28.5 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.28.5", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h", "oc create -f machineset-azure-gpu.yaml", "get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h", "oc get machineset -n openshift-machine-api | grep gpu", "myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m", "oc -n openshift-machine-api get machines | grep gpu", "myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE 
nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: 
<infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY 
AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3", "providerSpec: value: preemptible: true", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4", "gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5", "providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3", "providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5", "machineType: a2-highgpu-1g onHostMaintenance: Terminate", "{ \"apiVersion\": \"machine.openshift.io/v1beta1\", \"kind\": \"MachineSet\", \"metadata\": { \"annotations\": { \"machine.openshift.io/GPU\": \"0\", \"machine.openshift.io/memoryMb\": \"16384\", \"machine.openshift.io/vCPU\": \"4\" }, \"creationTimestamp\": \"2023-01-13T17:11:02Z\", \"generation\": 1, \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\" 
}, \"name\": \"myclustername-2pt9p-worker-gpu-a\", \"namespace\": \"openshift-machine-api\", \"resourceVersion\": \"20185\", \"uid\": \"2daf4712-733e-4399-b4b4-d43cb1ed32bd\" }, \"spec\": { \"replicas\": 1, \"selector\": { \"matchLabels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"template\": { \"metadata\": { \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machine-role\": \"worker\", \"machine.openshift.io/cluster-api-machine-type\": \"worker\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"spec\": { \"lifecycleHooks\": {}, \"metadata\": {}, \"providerSpec\": { \"value\": { \"apiVersion\": \"machine.openshift.io/v1beta1\", \"canIPForward\": false, \"credentialsSecret\": { \"name\": \"gcp-cloud-credentials\" }, \"deletionProtection\": false, \"disks\": [ { \"autoDelete\": true, \"boot\": true, \"image\": \"projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64\", \"labels\": null, \"sizeGb\": 128, \"type\": \"pd-ssd\" } ], \"kind\": \"GCPMachineProviderSpec\", \"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\", \"metadata\": { \"creationTimestamp\": null }, \"networkInterfaces\": [ { \"network\": \"myclustername-2pt9p-network\", \"subnetwork\": \"myclustername-2pt9p-worker-subnet\" } ], \"preemptible\": true, \"projectID\": \"myteam\", \"region\": \"us-central1\", \"serviceAccounts\": [ { \"email\": \"[email protected]\", \"scopes\": [ \"https://www.googleapis.com/auth/cloud-platform\" ] } ], \"tags\": [ \"myclustername-2pt9p-worker\" ], \"userDataSecret\": { \"name\": \"worker-user-data\" }, \"zone\": \"us-central1-a\" } } } } }, \"status\": { \"availableReplicas\": 1, \"fullyLabeledReplicas\": 1, \"observedGeneration\": 1, \"readyReplicas\": 1, \"replicas\": 1 } }", "oc get nodes", "NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.28.5 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.28.5 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.28.5 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.28.5 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.28.5 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.28.5 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.28.5", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h", "oc get machines -n openshift-machine-api | grep worker", "myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h", "oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json>", "jq .spec.template.spec.providerSpec.value.machineType ocp_4.15_machineset-a2-highgpu-1g.json \"a2-highgpu-1g\"", "\"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\",", "oc get machineset/myclustername-2pt9p-worker-a -n 
openshift-machine-api -o json | diff ocp_4.15_machineset-a2-highgpu-1g.json -", "15c15 < \"name\": \"myclustername-2pt9p-worker-gpu-a\", --- > \"name\": \"myclustername-2pt9p-worker-a\", 25c25 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 34c34 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 59,60c59 < \"machineType\": \"a2-highgpu-1g\", < \"onHostMaintenance\": \"Terminate\", --- > \"machineType\": \"n2-standard-4\",", "oc create -f ocp_4.15_machineset-a2-highgpu-1g.json", "machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created", "oc -n openshift-machine-api get machinesets | grep gpu", "myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m", "oc -n openshift-machine-api get machines | grep gpu", "myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m 
agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_PrivateUSD type: RegEx processorType: Shared processors: \"0.5\" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: 
openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: 
machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack 
cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>", "oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable=\"true\"", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'", "oc get secret -n openshift-machine-api vsphere-cloud-credentials -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "<vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user>", "oc create secret generic vsphere-cloud-credentials -n openshift-machine-api --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>", "oc get secret -n openshift-machine-api worker-user-data -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "disableTemplating: false userData: 1 { \"ignition\": { }, }", "oc create secret generic worker-user-data -n openshift-machine-api --from-file=<installation_directory>/worker.ign", "oc get machinesets -n openshift-machine-api", "NAME 
DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" numCPUs: 4 numCoresPerSocket: 4 snapshot: \"\" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_datacenter_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data-managed", "oc get -o 
jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_management/managing-compute-machines-with-the-machine-api
Part III. Configuring the Product
Part III. Configuring the Product
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/part-configuring_the_product
probe::socket.receive
probe::socket.receive Name probe::socket.receive - Message received on a socket. Synopsis socket.receive Values: name - Name of this probe; protocol - Protocol value; family - Protocol family value; success - Was the receive successful? (1 = yes, 0 = no); state - Socket state value; flags - Socket flags value; size - Size of message received (in bytes), or error code if success = 0; type - Socket type value. Context: The message receiver.
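To make these values concrete, here is a minimal usage sketch that is not part of the original reference. Assuming the systemtap package is installed and the command is run as root, the following ad hoc invocation attaches to socket.receive and prints the documented size, family, and success values for each received message; the output format is illustrative only.

stap -e 'probe socket.receive {
    # size, family and success are provided by this probe; execname() is a standard tapset helper
    printf("%s received: size=%d family=%d success=%d\n", execname(), size, family, success)
}'

Press Ctrl+C to stop tracing.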
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-socket-receive
5.9.5. Mounting File Systems Automatically with /etc/fstab
5.9.5. Mounting File Systems Automatically with /etc/fstab When a Red Hat Enterprise Linux system is newly-installed, all the disk partitions defined and/or created during the installation are configured to be automatically mounted whenever the system boots. However, what happens when additional disk drives are added to a system after the installation is done? The answer is "nothing" because the system was not configured to mount them automatically. However, this is easily changed. The answer lies in the /etc/fstab file. This file is used to control what file systems are mounted when the system boots, as well as to supply default values for other file systems that may be mounted manually from time to time. Here is a sample /etc/fstab file: Each line represents one file system and contains the following fields: File system specifier -- For disk-based file systems, either a device file name (/dev/sda1), a file system label specification (LABEL=/), or a devlabel-managed symbolic link (/dev/homedisk) Mount point -- Except for swap partitions, this field specifies the mount point to be used when the file system is mounted (/boot) File system type -- The type of file system present on the specified device (note that auto may be specified to select automatic detection of the file system to be mounted, which is handy for removable media units such as diskette drives) Mount options -- A comma-separated list of options that can be used to control mount's behavior (noauto,owner,kudzu) Dump frequency -- If the dump backup utility is used, the number in this field controls dump's handling of the specified file system File system check order -- Controls the order in which the file system checker fsck checks the integrity of the file systems
[ "LABEL=/ / ext3 defaults 1 1 /dev/sda1 /boot ext3 defaults 1 2 /dev/cdrom /mnt/cdrom iso9660 noauto,owner,kudzu,ro 0 0 /dev/homedisk /home ext3 defaults 1 2 /dev/sda2 swap swap defaults 0 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-storage-mount-fstab
Appendix A. Troubleshooting containerized Ansible Automation Platform
Appendix A. Troubleshooting containerized Ansible Automation Platform Use this information to troubleshoot your containerized Ansible Automation Platform installation. A.1. Diagnosing the problem For general container-based troubleshooting, you can inspect the container logs for any running service to help troubleshoot underlying issues. Identifying the running containers To get a list of the running container names run the following command: USD podman ps --all --format "{{.Names}}" Example output: postgresql redis-unix redis-tcp receptor automation-controller-rsyslog automation-controller-task automation-controller-web automation-eda-api automation-eda-daphne automation-eda-web automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2 automation-eda-scheduler automation-gateway-proxy automation-gateway automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2 Inspecting the logs To inspect any running container logs run the journalctl command: USD journalctl CONTAINER_NAME=<container_name> Example command with output: USD journalctl CONTAINER_NAME=automation-gateway-proxy Oct 08 01:40:12 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 00:40:12.885][2][info][upstream] [external/envoy/source/common/upstream/cds_ap> Oct 08 01:40:12 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 00:40:12.885][2][info][upstream] [external/envoy/source/common/upstream/cds_ap> Oct 08 01:40:19 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T00:40:16.753Z] "GET /up HTTP/1.1" 200 - 0 1138 10 0 "192.0.2.1" "python-> To view the logs of a running container in real-time, run the podman logs -f command: USD podman logs -f <container_name> Controlling container operations You can control operations for a container by running the systemctl command: USD systemctl --user status <container_name> Example command with output: USD systemctl --user status automation-gateway-proxy ● automation-gateway-proxy.service - Podman automation-gateway-proxy.service Loaded: loaded (/home/user/.config/systemd/user/automation-gateway-proxy.service; enabled; preset: disabled) Active: active (running) since Mon 2024-10-07 12:39:23 BST; 23h ago Docs: man:podman-generate-systemd(1) Process: 780 ExecStart=/usr/bin/podman start automation-gateway-proxy (code=exited, status=0/SUCCESS) Main PID: 1919 (conmon) Tasks: 1 (limit: 48430) Memory: 852.0K CPU: 2.996s CGroup: /user.slice/user-1000.slice/[email protected]/app.slice/automation-gateway-proxy.service └─1919 /usr/bin/conmon --api-version 1 -c 2dc3c7b2cecd73010bad1e0aaa806015065f92556ed3591c9d2084d7ee209c7a -u 2dc3c7b2cecd73010bad1e0aaa80> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:02.926Z] "GET /api/galaxy/_ui/v1/settings/ HTTP/1.1" 200 - 0 654 58 47 "> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.387Z] "GET /api/controller/v2/config/ HTTP/1.1" 200 - 0 4018 58 44 "1> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.370Z] "GET /api/galaxy/v3/plugin/ansible/search/collection-versions/?> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.405Z] "GET /api/controller/v2/organizations/?role_level=notification_> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.366Z] "GET /api/galaxy/_ui/v1/me/ HTTP/1.1" 200 - 0 1368 79 40 "192.1> Oct 08 11:44:10 aap.example.org 
automation-gateway-proxy[1919]: [2024-10-08T10:44:04.360Z] "GET /api/controller/v2/workflow_approvals/?page_size=200&statu> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.379Z] "GET /api/controller/v2/job_templates/7/ HTTP/1.1" 200 - 0 1356> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.378Z] "GET /api/galaxy/_ui/v1/feature-flags/ HTTP/1.1" 200 - 0 207 81> Oct 08 11:44:13 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 10:44:13.856][2][info][upstream] [external/envoy/source/common/upstream/cds_ap> Oct 08 11:44:13 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 10:44:13.856][2][info][upstream] [external/envoy/source/common/upstream/cds_ap Getting container information about the execution plane To get container information about automation controller, Event-Driven Ansible, and execution_nodes nodes, prefix any Podman commands with either: CONTAINER_HOST=unix://run/user/<user_id>/podman/podman.sock or CONTAINERS_STORAGE_CONF=<user_home_directory>/aap/containers/storage.conf Example with output: USD CONTAINER_HOST=unix://run/user/1000/podman/podman.sock podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/ansible-automation-platform-25/ee-supported-rhel8 latest 59d1bc680a7c 6 days ago 2.24 GB registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel8 latest a64b9fc48094 6 days ago 338 MB A.2. Troubleshooting containerized Ansible Automation Platform installation The installation takes a long time, or has errors. What should I check? Ensure your system meets the minimum requirements as outlined in the installation guide. Items such as improper storage choices and high latency when distributing across many hosts will all have a significant impact. Check the installation log file, located by default at ./aap_install.log unless otherwise changed within the local installer ansible.cfg. Enable task profiling callbacks on an ad hoc basis to give an overview of where the installation program spends the most time. To do this, use the local ansible.cfg file. Add a callback line such as this under the [defaults] section: USD cat ansible.cfg [defaults] callbacks_enabled = ansible.posix.profile_tasks Automation controller returns an error of 413 This error is due to manifest.zip license files that are larger than the nginx_client_max_body_size setting. If this error occurs, you will need to change the installation inventory file to include the following variables: nginx_disable_hsts=false nginx_http_port=8081 nginx_https_port=8444 nginx_client_max_body_size=20m nginx_user_headers=[] The current default setting of 20m should be enough to avoid this issue. The installation failed with a "502 Bad Gateway" when going to the controller UI. This error can occur and manifest itself in the installation application output as: TASK [ansible.containerized_installer.automationcontroller : Wait for the Controller API to be ready] ****************************************************** fatal: [daap1.lan]: FAILED! => {"changed": false, "connection": "close", "content_length": "150", "content_type": "text/html", "date": "Fri, 29 Sep 2023 09:42:32 GMT", "elapsed": 0, "msg": "Status code was 502 and not [200]: HTTP Error 502: Bad Gateway", "redirected": false, "server": "nginx", "status": 502, "url": "https://daap1.lan:443/api/v2/ping/"} Check if you have an automation-controller-web container running and a corresponding systemd service. Note These checks run against the regular unprivileged user's services, not system-wide units.
If you have used su to switch to the user running the containers, you must set your XDG_RUNTIME_DIR environment variable to the correct value to be able to interact with the user systemctl units. Run the command export XDG_RUNTIME_DIR="/run/user/USDUID". Then check: podman ps | grep web systemctl --user | grep web No output indicates a problem. Try restarting the automation-controller-web service: systemctl start automation-controller-web.service --user systemctl --user | grep web systemctl status automation-controller-web.service --user Sep 29 10:55:16 daap1.lan automation-controller-web[29875]: nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) Sep 29 10:55:16 daap1.lan automation-controller-web[29875]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) The output indicates that the port is already, or still, in use by another service, in this case nginx. Run: sudo pkill nginx Then restart the web service and check its status again. Normal service output should look similar to the following, and the service should still be running: Sep 29 10:59:26 daap1.lan automation-controller-web[30274]: WSGI app 0 (mountpoint='/') ready in 3 seconds on interpreter 0x1a458c10 pid: 17 (default app) Sep 29 10:59:26 daap1.lan automation-controller-web[30274]: WSGI app 0 (mountpoint='/') ready in 3 seconds on interpreter 0x1a458c10 pid: 20 (default app) Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,043 INFO [-] daphne.cli Starting server at tcp:port=8051:interface=127.0.> Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,043 INFO Starting server at tcp:port=8051:interface=127.0.0.1 Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,048 INFO [-] daphne.server HTTP/2 support not enabled (install the http2 > Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,048 INFO HTTP/2 support not enabled (install the http2 and tls Twisted ex> Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,049 INFO [-] daphne.server Configuring endpoint tcp:port=8051:interface=1> Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,049 INFO Configuring endpoint tcp:port=8051:interface=127.0.0.1 Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,051 INFO [-] daphne.server Listening on TCP address 127.0.0.1:8051 Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,051 INFO Listening on TCP address 127.0.0.1:8051 Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: nginx entered RUNNING state, process has stayed up for > th> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: nginx entered RUNNING state, process has stayed up for > th> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: uwsgi entered RUNNING state, process has stayed up for > th> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: uwsgi entered RUNNING state, process has stayed up for > th> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: daphne entered RUNNING state, process has stayed up for > t> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: daphne entered RUNNING state, process has stayed up for > t> Sep 29 10:59:54 daap1.lan
automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: ws-heartbeat entered RUNNING state, process has stayed up f> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: ws-heartbeat entered RUNNING state, process has stayed up f> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: cache-clear entered RUNNING state, process has stayed up fo> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: cache-clear entered RUNNING state, process has stayed up You can run the installation program again to ensure everything installs as expected. When attempting to install containerized Ansible Automation Platform in Amazon Web Services you receive output that there is no space left on device This can appear in the installation program output as a container creation task, for example creating the receptor container, failing with an error that ends in "no space left on device". If you are installing into a default Amazon Web Services marketplace RHEL instance, the /home filesystem might be too small, because /home is part of the root (/) filesystem. You will need to make more space available. The documentation specifies a minimum of 40 GB for a single-node deployment of containerized Ansible Automation Platform. "Install container tools" task fails due to unavailable packages This error occurs in the installation application output as: TASK [ansible.containerized_installer.common : Install container tools] ********************************************************************************************************** fatal: [192.0.2.1]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []} fatal: [192.0.2.2]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []} fatal: [192.0.2.3]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []} fatal: [192.0.2.4]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []} fatal: [192.0.2.5]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []} To fix this error, run the following command on the target hosts: sudo subscription-manager register A.3. Troubleshooting containerized Ansible Automation Platform configuration Sometimes the post-install step that seeds my Ansible Automation Platform content errors out. This can manifest itself as a Configure Controller Projects | Wait for finish the projects creation task that fails after exhausting all of its retries ("attempts": 30). The infra.controller_configuration.dispatch role uses an asynchronous loop with 30 retries to apply each configuration type, and the default delay between retries is 1 second. If the configuration is large, this might not be enough time to apply everything before the last retry occurs.
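The fix is to increase the delay between retries. As a minimal sketch, assuming you choose to set the variable through the installation program inventory file rather than in the repository that holds your controller configuration, this could look like:
[all:vars]
controller_configuration_async_delay=2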
To do this, set the controller_configuration_async_delay variable to something other than 1 second; for example, setting it to 2 seconds doubles the retry time. The place to do this would be in the repository where the controller configuration is defined. It can also be added to the [all:vars] section of the installation program inventory file, as in the sketch above. A few instances have shown that no additional modification is required, and that re-running the installation program worked. A.4. Containerized Ansible Automation Platform reference Can you give details of the architecture for the Ansible Automation Platform containerized design? We use as much of the underlying native Red Hat Enterprise Linux technology as possible. Podman is used for the container runtime and management of services. Use podman ps to list the running containers on the system: USD podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 88ed40495117 registry.redhat.io/rhel8/postgresql-13:latest run-postgresql 48 minutes ago Up 47 minutes postgresql 8f55ba612f04 registry.redhat.io/rhel8/redis-6:latest run-redis 47 minutes ago Up 47 minutes redis 56c40445c590 registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8:latest /usr/bin/receptor... 47 minutes ago Up 47 minutes receptor f346f05d56ee registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 47 minutes ago Up 45 minutes automation-controller-rsyslog 26e3221963e3 registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 46 minutes ago Up 45 minutes automation-controller-task c7ac92a1e8a1 registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 46 minutes ago Up 28 minutes automation-controller-web Use podman images to display information about locally stored images: USD podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8 latest b497bdbee59e 10 days ago 3.16 GB registry.redhat.io/ansible-automation-platform-24/controller-rhel8 latest ed8ebb1c1baa 10 days ago 1.48 GB registry.redhat.io/rhel8/redis-6 latest 78905519bb05 2 weeks ago 357 MB registry.redhat.io/rhel8/postgresql-13 latest 9b65bc3d0413 2 weeks ago 765 MB Containerized Ansible Automation Platform runs as rootless containers for enhanced security by default. This means you can install containerized Ansible Automation Platform by using any local unprivileged user account. Privilege escalation is only needed for certain root-level tasks, and you do not need to work as root directly by default. The installation program adds the following files to the filesystem where you run the installation program on the underlying Red Hat Enterprise Linux host: USD tree -L 1 . ├── aap_install.log ├── ansible.cfg ├── collections ├── galaxy.yml ├── inventory ├── LICENSE ├── meta ├── playbooks ├── plugins ├── README.md ├── requirements.yml ├── roles The installation root directory also includes other containerized services that make use of Podman volumes.
Here are some examples for further reference: The containers directory includes some of the Podman specifics used and installed for the execution plane: containers/ ├── podman ├── storage │ ├── defaultNetworkBackend │ ├── libpod │ ├── networks │ ├── overlay │ ├── overlay-containers │ ├── overlay-images │ ├── overlay-layers │ ├── storage.lock │ └── userns.lock └── storage.conf The controller directory has some of the installed configuration and runtime data points: controller/ ├── data │ ├── job_execution │ ├── projects │ └── rsyslog ├── etc │ ├── conf.d │ ├── launch_awx_task.sh │ ├── settings.py │ ├── tower.cert │ └── tower.key ├── nginx │ └── etc ├── rsyslog │ └── run └── supervisor └── run The receptor directory has the automation mesh configuration: receptor/ ├── etc │ └── receptor.conf └── run ├── receptor.sock └── receptor.sock.lock After installation, you will also find other pieces in the local user's home directory, such as the .cache directory: .cache/ ├── containers │ └── short-name-aliases.conf.lock └── rhsm └── rhsm.log Because the services run under rootless Podman by default, supporting services such as systemd user units are also used at the non-privileged user level, and under systemd you can see some of the component service controls available. The .config directory: .config/ ├── cni │ └── net.d │ └── cni.lock ├── containers │ ├── auth.json │ └── containers.conf └── systemd └── user ├── automation-controller-rsyslog.service ├── automation-controller-task.service ├── automation-controller-web.service ├── default.target.wants ├── podman.service.d ├── postgresql.service ├── receptor.service ├── redis.service └── sockets.target.wants This is specific to Podman and conforms to the Open Container Initiative (OCI) specifications. When you run Podman as the root user, /var/lib/containers is used by default; for standard users, the hierarchy under USDHOME/.local is used. The .local directory: .local/ └── share └── containers ├── cache ├── podman └── storage As an example, .local/share/containers/storage/volumes contains what the output from podman volume ls provides: USD podman volume ls DRIVER VOLUME NAME local d73d3fe63a957bee04b4853fd38c39bf37c321d14fdab9ee3c9df03645135788 local postgresql local redis_data local redis_etc local redis_run The execution plane is isolated from the control plane services to ensure that it does not affect them. Control plane services Control plane services run with the standard Podman configuration and can be found in ~/.local/share/containers/storage. Execution plane services Execution plane services (automation controller, Event-Driven Ansible, and execution nodes) use a dedicated configuration found in ~/aap/containers/storage.conf. This separation prevents execution plane containers from affecting the control plane services. You can view the execution plane configuration with one of the following commands: CONTAINERS_STORAGE_CONF=~/aap/containers/storage.conf podman <subcommand> CONTAINER_HOST=unix://run/user/<user uid>/podman/podman.sock podman <subcommand> How can I see host resource utilization statistics?
Run: USD podman container stats -a ID NAME CPU % MEM USAGE / LIMIT MEM % NET IO BLOCK IO PIDS CPU TIME AVG CPU % 0d5d8eb93c18 automation-controller-web 0.23% 959.1MB / 3.761GB 25.50% 0B / 0B 0B / 0B 16 20.885142s 1.19% 3429d559836d automation-controller-rsyslog 0.07% 144.5MB / 3.761GB 3.84% 0B / 0B 0B / 0B 6 4.099565s 0.23% 448d0bae0942 automation-controller-task 1.51% 633.1MB / 3.761GB 16.83% 0B / 0B 0B / 0B 33 34.285272s 1.93% 7f140e65b57e receptor 0.01% 5.923MB / 3.761GB 0.16% 0B / 0B 0B / 0B 7 1.010613s 0.06% c1458367ca9c redis 0.48% 10.52MB / 3.761GB 0.28% 0B / 0B 0B / 0B 5 9.074042s 0.47% ef712cc2dc89 postgresql 0.09% 21.88MB / 3.761GB 0.58% 0B / 0B 0B / 0B 21 15.571059s 0.80% This example is from a Dell-sold containerized Ansible Automation Platform solution (DAAP) installation and uses roughly 1.8 GB of RAM. How much storage is used and where? The container volume storage is under the local user at USDHOME/.local/share/containers/storage/volumes. To view the details of each volume, run: USD podman volume ls Then run: USD podman volume inspect <volume_name> Here is an example: USD podman volume inspect postgresql [ { "Name": "postgresql", "Driver": "local", "Mountpoint": "/home/aap/.local/share/containers/storage/volumes/postgresql/_data", "CreatedAt": "2024-01-08T23:39:24.983964686Z", "Labels": {}, "Scope": "local", "Options": {}, "MountCount": 0, "NeedsCopyUp": true } ] Several files created by the installation program are located in USDHOME/aap/ and bind-mounted into various running containers. To view the mounts associated with a container, run: USD podman ps --format "{{.ID}}\t{{.Command}}\t{{.Names}}" 89e779b81b83 run-postgresql postgresql 4c33cc77ef7d run-redis redis 3d8a028d892d /usr/bin/receptor... receptor 09821701645c /usr/bin/launch_a... automation-controller-rsyslog a2ddb5cac71b /usr/bin/launch_a... automation-controller-task fa0029a3b003 /usr/bin/launch_a... automation-controller-web 20f192534691 gunicorn --bind 1... automation-eda-api f49804c7e6cb daphne -b 127.0.0... automation-eda-daphne d340b9c1cb74 /bin/sh -c nginx ... automation-eda-web 111f47de5205 aap-eda-manage rq... automation-eda-worker-1 171fcb1785af aap-eda-manage rq... automation-eda-worker-2 049d10555b51 aap-eda-manage rq... automation-eda-activation-worker-1 7a78a41a8425 aap-eda-manage rq... automation-eda-activation-worker-2 da9afa8ef5e2 aap-eda-manage sc... automation-eda-scheduler 8a2958be9baf gunicorn --name p... automation-hub-api 0a8b57581749 gunicorn --name p... automation-hub-content 68005b987498 nginx -g daemon o...
automation-hub-web cb07af77f89f pulpcore-worker automation-hub-worker-1 a3ba05136446 pulpcore-worker automation-hub-worker-2 Then run: USD podman inspect <container_name> | jq -r .[].Mounts[].Source /home/aap/.local/share/containers/storage/volumes/receptor_run/_data /home/aap/.local/share/containers/storage/volumes/redis_run/_data /home/aap/aap/controller/data/rsyslog /home/aap/aap/controller/etc/tower.key /home/aap/aap/controller/etc/conf.d/callback_receiver_workers.py /home/aap/aap/controller/data/job_execution /home/aap/aap/controller/nginx/etc/controller.conf /home/aap/aap/controller/etc/conf.d/subscription_usage_model.py /home/aap/aap/controller/etc/conf.d/cluster_host_id.py /home/aap/aap/controller/etc/conf.d/insights.py /home/aap/aap/controller/rsyslog/run /home/aap/aap/controller/data/projects /home/aap/aap/controller/etc/settings.py /home/aap/aap/receptor/etc/receptor.conf /home/aap/aap/controller/etc/conf.d/execution_environments.py /home/aap/aap/tls/extracted /home/aap/aap/controller/supervisor/run /home/aap/aap/controller/etc/uwsgi.ini /home/aap/aap/controller/etc/conf.d/container_groups.py /home/aap/aap/controller/etc/launch_awx_task.sh /home/aap/aap/controller/etc/tower.cert If the jq RPM is not installed, install with: USD sudo dnf -y install jq
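Related to the storage question above, if you want an at-a-glance summary of how much space images, containers, and local volumes consume, the standard podman system df subcommand can also be used. This is generic Podman functionality rather than anything specific to the Ansible Automation Platform installer; for the execution plane storage, prefix it in the same way as the other Podman commands shown earlier:
USD podman system df
USD CONTAINERS_STORAGE_CONF=~/aap/containers/storage.conf podman system df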
[ "podman ps --all --format \"{{.Names}}\"", "postgresql redis-unix redis-tcp receptor automation-controller-rsyslog automation-controller-task automation-controller-web automation-eda-api automation-eda-daphne automation-eda-web automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2 automation-eda-scheduler automation-gateway-proxy automation-gateway automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2", "journalctl CONTAINER_NAME=<container_name>", "journalctl CONTAINER_NAME=automation-gateway-proxy Oct 08 01:40:12 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 00:40:12.885][2][info][upstream] [external/envoy/source/common/upstream/cds_ap> Oct 08 01:40:12 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 00:40:12.885][2][info][upstream] [external/envoy/source/common/upstream/cds_ap> Oct 08 01:40:19 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T00:40:16.753Z] \"GET /up HTTP/1.1\" 200 - 0 1138 10 0 \"192.0.2.1\" \"python->", "podman logs -f <container_name>", "systemctl --user status <container_name>", "systemctl --user status automation-gateway-proxy ● automation-gateway-proxy.service - Podman automation-gateway-proxy.service Loaded: loaded (/home/user/.config/systemd/user/automation-gateway-proxy.service; enabled; preset: disabled) Active: active (running) since Mon 2024-10-07 12:39:23 BST; 23h ago Docs: man:podman-generate-systemd(1) Process: 780 ExecStart=/usr/bin/podman start automation-gateway-proxy (code=exited, status=0/SUCCESS) Main PID: 1919 (conmon) Tasks: 1 (limit: 48430) Memory: 852.0K CPU: 2.996s CGroup: /user.slice/user-1000.slice/[email protected]/app.slice/automation-gateway-proxy.service └─1919 /usr/bin/conmon --api-version 1 -c 2dc3c7b2cecd73010bad1e0aaa806015065f92556ed3591c9d2084d7ee209c7a -u 2dc3c7b2cecd73010bad1e0aaa80> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:02.926Z] \"GET /api/galaxy/_ui/v1/settings/ HTTP/1.1\" 200 - 0 654 58 47 \"> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.387Z] \"GET /api/controller/v2/config/ HTTP/1.1\" 200 - 0 4018 58 44 \"1> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.370Z] \"GET /api/galaxy/v3/plugin/ansible/search/collection-versions/?> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.405Z] \"GET /api/controller/v2/organizations/?role_level=notification_> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.366Z] \"GET /api/galaxy/_ui/v1/me/ HTTP/1.1\" 200 - 0 1368 79 40 \"192.1> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.360Z] \"GET /api/controller/v2/workflow_approvals/?page_size=200&statu> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.379Z] \"GET /api/controller/v2/job_templates/7/ HTTP/1.1\" 200 - 0 1356> Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.378Z] \"GET /api/galaxy/_ui/v1/feature-flags/ HTTP/1.1\" 200 - 0 207 81> Oct 08 11:44:13 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 10:44:13.856][2][info][upstream] [external/envoy/source/common/upstream/cds_ap> Oct 08 11:44:13 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 10:44:13.856][2][info][upstream] [external/envoy/source/common/upstream/cds_ap", 
"CONTAINER_HOST=unix://run/user/<user_id>/podman/podman.sock", "CONTAINERS_STORAGE_CONF=<user_home_directory>/aap/containers/storage.conf", "CONTAINER_HOST=unix://run/user/1000/podman/podman.sock podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/ansible-automation-platform-25/ee-supported-rhel8 latest 59d1bc680a7c 6 days ago 2.24 GB registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel8 latest a64b9fc48094 6 days ago 338 MB", "cat ansible.cfg [defaults] callbacks_enabled = ansible.posix.profile_tasks", "nginx_disable_hsts=false nginx_http_port=8081 nginx_https_port=8444 nginx_client_max_body_size=20m nginx_user_headers=[]", "TASK [ansible.containerized_installer.automationcontroller : Wait for the Controller API to te ready] ****************************************************** fatal: [daap1.lan]: FAILED! => {\"changed\": false, \"connection\": \"close\", \"content_length\": \"150\", \"content_type\": \"text/html\", \"date\": \"Fri, 29 Sep 2023 09:42:32 GMT\", \"elapsed\": 0, \"msg\": \"Status code was 502 and not [200]: HTTP Error 502: Bad Gateway\", \"redirected\": false, \"server\": \"nginx\", \"status\": 502, \"url\": \"https://daap1.lan:443/api/v2/ping/\"}", "ps | grep web", "systemctl --user | grep web", "systemctl start automation-controller-web.service --user", "systemctl --user | grep web", "systemctl status automation-controller-web.service --user", "Sep 29 10:55:16 daap1.lan automation-controller-web[29875]: nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use) Sep 29 10:55:16 daap1.lan automation-controller-web[29875]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)", "sudo pkill nginx", "Sep 29 10:59:26 daap1.lan automation-controller-web[30274]: WSGI app 0 (mountpoint='/') ready in 3 seconds on interpreter 0x1a458c10 pid: 17 (default app) Sep 29 10:59:26 daap1.lan automation-controller-web[30274]: WSGI app 0 (mountpoint='/') ready in 3 seconds on interpreter 0x1a458c10 pid: 20 (default app) Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,043 INFO [-] daphne.cli Starting server at tcp:port=8051:interface=127.0.> Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,043 INFO Starting server at tcp:port=8051:interface=127.0.0.1 Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,048 INFO [-] daphne.server HTTP/2 support not enabled (install the http2 > Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,048 INFO HTTP/2 support not enabled (install the http2 and tls Twisted ex> Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,049 INFO [-] daphne.server Configuring endpoint tcp:port=8051:interface=1> Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,049 INFO Configuring endpoint tcp:port=8051:interface=127.0.0.1 Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,051 INFO [-] daphne.server Listening on TCP address 127.0.0.1:8051 Sep 29 10:59:27 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:27,051 INFO Listening on TCP address 127.0.0.1:8051 Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: nginx entered RUNNING state, process has stayed up for > th> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: nginx entered RUNNING state, process has stayed up for > th> Sep 29 10:59:54 daap1.lan 
automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: uwsgi entered RUNNING state, process has stayed up for > th> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: uwsgi entered RUNNING state, process has stayed up for > th> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: daphne entered RUNNING state, process has stayed up for > t> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: daphne entered RUNNING state, process has stayed up for > t> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: ws-heartbeat entered RUNNING state, process has stayed up f> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: ws-heartbeat entered RUNNING state, process has stayed up f> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: cache-clear entered RUNNING state, process has stayed up fo> Sep 29 10:59:54 daap1.lan automation-controller-web[30274]: 2023-09-29 09:59:54,139 INFO success: cache-clear entered RUNNING state, process has stayed up", "TASK [ansible.containerized_installer.automationcontroller : Create the receptor container] *************************************************** fatal: [ec2-13-48-25-168.eu-north-1.compute.amazonaws.com]: FAILED! => {\"changed\": false, \"msg\": \"Can't create container receptor\", \"stderr\": \"Error: creating container storage: creating an ID-mapped copy of layer \\\"98955f43cc908bd50ff43585fec2c7dd9445eaf05eecd1e3144f93ffc00ed4ba\\\": error during chown: storage-chown-by-maps: lchown usr/local/lib/python3.9/site-packages/azure/mgmt/network/v2019_11_01/operations/__pycache__/_available_service_aliases_operations.cpython-39.pyc: no space left on device: exit status 1\\n\", \"stderr_lines\": [\"Error: creating container storage: creating an ID-mapped copy of layer \\\"98955f43cc908bd50ff43585fec2c7dd9445eaf05eecd1e3144f93ffc00ed4ba\\\": error during chown: storage-chown-by-maps: lchown usr/local/lib/python3.9/site-packages/azure/mgmt/network/v2019_11_01/operations/__pycache__/_available_service_aliases_operations.cpython-39.pyc: no space left on device: exit status 1\"], \"stdout\": \"\", \"stdout_lines\": []}", "TASK [ansible.containerized_installer.common : Install container tools] ********************************************************************************************************** fatal: [192.0.2.1]: FAILED! => {\"changed\": false, \"failures\": [\"No package crun available.\", \"No package podman available.\", \"No package slirp4netns available.\", \"No package fuse-overlayfs available.\"], \"msg\": \"Failed to install some of the specified packages\", \"rc\": 1, \"results\": []} fatal: [192.0.2.2]: FAILED! => {\"changed\": false, \"failures\": [\"No package crun available.\", \"No package podman available.\", \"No package slirp4netns available.\", \"No package fuse-overlayfs available.\"], \"msg\": \"Failed to install some of the specified packages\", \"rc\": 1, \"results\": []} fatal: [192.0.2.3]: FAILED! => {\"changed\": false, \"failures\": [\"No package crun available.\", \"No package podman available.\", \"No package slirp4netns available.\", \"No package fuse-overlayfs available.\"], \"msg\": \"Failed to install some of the specified packages\", \"rc\": 1, \"results\": []} fatal: [192.0.2.4]: FAILED! 
=> {\"changed\": false, \"failures\": [\"No package crun available.\", \"No package podman available.\", \"No package slirp4netns available.\", \"No package fuse-overlayfs available.\"], \"msg\": \"Failed to install some of the specified packages\", \"rc\": 1, \"results\": []} fatal: [192.0.2.5]: FAILED! => {\"changed\": false, \"failures\": [\"No package crun available.\", \"No package podman available.\", \"No package slirp4netns available.\", \"No package fuse-overlayfs available.\"], \"msg\": \"Failed to install some of the specified packages\", \"rc\": 1, \"results\": []}", "sudo subscription-manager register", "TASK [infra.controller_configuration.projects : Configure Controller Projects | Wait for finish the projects creation] *************************************** Friday 29 September 2023 11:02:32 +0100 (0:00:00.443) 0:00:53.521 ****** FAILED - RETRYING: [daap1.lan]: Configure Controller Projects | Wait for finish the projects creation (1 retries left). failed: [daap1.lan] (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': '536962174348.33944', 'results_file': '/home/aap/.ansible_async/536962174348.33944', 'changed': False, '__controller_project_item': {'name': 'AAP Config-As-Code Examples', 'organization': 'Default', 'scm_branch': 'main', 'scm_clean': 'no', 'scm_delete_on_update': 'no', 'scm_type': 'git', 'scm_update_on_launch': 'no', 'scm_url': 'https://github.com/user/repo.git'}, 'ansible_loop_var': '__controller_project_item'}) => {\"__projects_job_async_results_item\": {\"__controller_project_item\": {\"name\": \"AAP Config-As-Code Examples\", \"organization\": \"Default\", \"scm_branch\": \"main\", \"scm_clean\": \"no\", \"scm_delete_on_update\": \"no\", \"scm_type\": \"git\", \"scm_update_on_launch\": \"no\", \"scm_url\": \"https://github.com/user/repo.git\"}, \"ansible_job_id\": \"536962174348.33944\", \"ansible_loop_var\": \"__controller_project_item\", \"changed\": false, \"failed\": 0, \"finished\": 0, \"results_file\": \"/home/aap/.ansible_async/536962174348.33944\", \"started\": 1}, \"ansible_job_id\": \"536962174348.33944\", \"ansible_loop_var\": \"__projects_job_async_results_item\", \"attempts\": 30, \"changed\": false, \"finished\": 0, \"results_file\": \"/home/aap/.ansible_async/536962174348.33944\", \"started\": 1, \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 88ed40495117 registry.redhat.io/rhel8/postgresql-13:latest run-postgresql 48 minutes ago Up 47 minutes postgresql 8f55ba612f04 registry.redhat.io/rhel8/redis-6:latest run-redis 47 minutes ago Up 47 minutes redis 56c40445c590 registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8:latest /usr/bin/receptor... 47 minutes ago Up 47 minutes receptor f346f05d56ee registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 47 minutes ago Up 45 minutes automation-controller-rsyslog 26e3221963e3 registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 46 minutes ago Up 45 minutes automation-controller-task c7ac92a1e8a1 registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 
46 minutes ago Up 28 minutes automation-controller-web", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8 latest b497bdbee59e 10 days ago 3.16 GB registry.redhat.io/ansible-automation-platform-24/controller-rhel8 latest ed8ebb1c1baa 10 days ago 1.48 GB registry.redhat.io/rhel8/redis-6 latest 78905519bb05 2 weeks ago 357 MB registry.redhat.io/rhel8/postgresql-13 latest 9b65bc3d0413 2 weeks ago 765 MB", "tree -L 1 . β”œβ”€β”€ aap_install.log β”œβ”€β”€ ansible.cfg β”œβ”€β”€ collections β”œβ”€β”€ galaxy.yml β”œβ”€β”€ inventory β”œβ”€β”€ LICENSE β”œβ”€β”€ meta β”œβ”€β”€ playbooks β”œβ”€β”€ plugins β”œβ”€β”€ README.md β”œβ”€β”€ requirements.yml β”œβ”€β”€ roles", "containers/ β”œβ”€β”€ podman β”œβ”€β”€ storage β”‚ β”œβ”€β”€ defaultNetworkBackend β”‚ β”œβ”€β”€ libpod β”‚ β”œβ”€β”€ networks β”‚ β”œβ”€β”€ overlay β”‚ β”œβ”€β”€ overlay-containers β”‚ β”œβ”€β”€ overlay-images β”‚ β”œβ”€β”€ overlay-layers β”‚ β”œβ”€β”€ storage.lock β”‚ └── userns.lock └── storage.conf", "controller/ β”œβ”€β”€ data β”‚ β”œβ”€β”€ job_execution β”‚ β”œβ”€β”€ projects β”‚ └── rsyslog β”œβ”€β”€ etc β”‚ β”œβ”€β”€ conf.d β”‚ β”œβ”€β”€ launch_awx_task.sh β”‚ β”œβ”€β”€ settings.py β”‚ β”œβ”€β”€ tower.cert β”‚ └── tower.key β”œβ”€β”€ nginx β”‚ └── etc β”œβ”€β”€ rsyslog β”‚ └── run └── supervisor └── run", "receptor/ β”œβ”€β”€ etc β”‚ └── receptor.conf └── run β”œβ”€β”€ receptor.sock └── receptor.sock.lock", ".cache/ β”œβ”€β”€ containers β”‚ └── short-name-aliases.conf.lock └── rhsm └── rhsm.log", ".config/ β”œβ”€β”€ cni β”‚ └── net.d β”‚ └── cni.lock β”œβ”€β”€ containers β”‚ β”œβ”€β”€ auth.json β”‚ └── containers.conf └── systemd └── user β”œβ”€β”€ automation-controller-rsyslog.service β”œβ”€β”€ automation-controller-task.service β”œβ”€β”€ automation-controller-web.service β”œβ”€β”€ default.target.wants β”œβ”€β”€ podman.service.d β”œβ”€β”€ postgresql.service β”œβ”€β”€ receptor.service β”œβ”€β”€ redis.service └── sockets.target.wants", ".local/ └── share └── containers β”œβ”€β”€ cache β”œβ”€β”€ podman └── storage", "podman volume ls DRIVER VOLUME NAME local d73d3fe63a957bee04b4853fd38c39bf37c321d14fdab9ee3c9df03645135788 local postgresql local redis_data local redis_etc local redis_run", "CONTAINERS_STORAGE_CONF=~/aap/containers/storage.conf podman <subcommand>", "CONTAINER_HOST=unix://run/user/<user uid>/podman/podman.sock podman <subcommand>", "podman container stats -a", "ID NAME CPU % MEM USAGE / LIMIT MEM % NET IO BLOCK IO PIDS CPU TIME AVG CPU % 0d5d8eb93c18 automation-controller-web 0.23% 959.1MB / 3.761GB 25.50% 0B / 0B 0B / 0B 16 20.885142s 1.19% 3429d559836d automation-controller-rsyslog 0.07% 144.5MB / 3.761GB 3.84% 0B / 0B 0B / 0B 6 4.099565s 0.23% 448d0bae0942 automation-controller-task 1.51% 633.1MB / 3.761GB 16.83% 0B / 0B 0B / 0B 33 34.285272s 1.93% 7f140e65b57e receptor 0.01% 5.923MB / 3.761GB 0.16% 0B / 0B 0B / 0B 7 1.010613s 0.06% c1458367ca9c redis 0.48% 10.52MB / 3.761GB 0.28% 0B / 0B 0B / 0B 5 9.074042s 0.47% ef712cc2dc89 postgresql 0.09% 21.88MB / 3.761GB 0.58% 0B / 0B 0B / 0B 21 15.571059s 0.80%", "podman volume ls", "podman volume inspect <volume_name>", "podman volume inspect postgresql [ { \"Name\": \"postgresql\", \"Driver\": \"local\", \"Mountpoint\": \"/home/aap/.local/share/containers/storage/volumes/postgresql/_data\", \"CreatedAt\": \"2024-01-08T23:39:24.983964686Z\", \"Labels\": {}, \"Scope\": \"local\", \"Options\": {}, \"MountCount\": 0, \"NeedsCopyUp\": true } ]", "podman ps --format 
\"{{.ID}}\\t{{.Command}}\\t{{.Names}}\"", "89e779b81b83 run-postgresql postgresql 4c33cc77ef7d run-redis redis 3d8a028d892d /usr/bin/receptor... receptor 09821701645c /usr/bin/launch_a... automation-controller-rsyslog a2ddb5cac71b /usr/bin/launch_a... automation-controller-task fa0029a3b003 /usr/bin/launch_a... automation-controller-web 20f192534691 gunicorn --bind 1... automation-eda-api f49804c7e6cb daphne -b 127.0.0... automation-eda-daphne d340b9c1cb74 /bin/sh -c nginx ... automation-eda-web 111f47de5205 aap-eda-manage rq... automation-eda-worker-1 171fcb1785af aap-eda-manage rq... automation-eda-worker-2 049d10555b51 aap-eda-manage rq... automation-eda-activation-worker-1 7a78a41a8425 aap-eda-manage rq... automation-eda-activation-worker-2 da9afa8ef5e2 aap-eda-manage sc... automation-eda-scheduler 8a2958be9baf gunicorn --name p... automation-hub-api 0a8b57581749 gunicorn --name p... automation-hub-content 68005b987498 nginx -g daemon o... automation-hub-web cb07af77f89f pulpcore-worker automation-hub-worker-1 a3ba05136446 pulpcore-worker automation-hub-worker-2", "podman inspect <container_name> | jq -r .[].Mounts[].Source", "/home/aap/.local/share/containers/storage/volumes/receptor_run/_data /home/aap/.local/share/containers/storage/volumes/redis_run/_data /home/aap/aap/controller/data/rsyslog /home/aap/aap/controller/etc/tower.key /home/aap/aap/controller/etc/conf.d/callback_receiver_workers.py /home/aap/aap/controller/data/job_execution /home/aap/aap/controller/nginx/etc/controller.conf /home/aap/aap/controller/etc/conf.d/subscription_usage_model.py /home/aap/aap/controller/etc/conf.d/cluster_host_id.py /home/aap/aap/controller/etc/conf.d/insights.py /home/aap/aap/controller/rsyslog/run /home/aap/aap/controller/data/projects /home/aap/aap/controller/etc/settings.py /home/aap/aap/receptor/etc/receptor.conf /home/aap/aap/controller/etc/conf.d/execution_environments.py /home/aap/aap/tls/extracted /home/aap/aap/controller/supervisor/run /home/aap/aap/controller/etc/uwsgi.ini /home/aap/aap/controller/etc/conf.d/container_groups.py /home/aap/aap/controller/etc/launch_awx_task.sh /home/aap/aap/controller/etc/tower.cert", "sudo dnf -y install jq" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-troubleshoot-containerized-aap